AI Systems Builder
I build autonomous agents, media generation pipelines, and interactive video systems — from architecture to production deployment.
I built a new kind of video — narrated by AI, animated with choreographed timing, and fully interactive in the browser. Not sealed pixels. Live HTML where viewers click, drag, and answer. It required solving 6 problems with no prior art: sub-45ms audio sync, DOM isolation across slides, AI-directed animation choreography, presentation-aware voice shaping, in-video data collection at 2.7x survey completion rates, and a 58-tool agent pipeline that produces finished video from a text prompt.
Core science team. Developed evaluation frameworks that determined production readiness for Amazon's image and video generation models. Launched December 2024.
10+ papers, h-index 9. Built systems to quantify gender representation in Hollywood at scale.
Google Scholar →
58 MCP tools expose the full pipeline as an API. Any AI agent can go from a text prompt to a playable narrated interactive video in a single autonomous session. 6 services on AWS ECS Fargate with service discovery, SQS queues, and PostgreSQL.
Autonomous agents that handle complex workflows — research, decisions, multi-step execution. I've built a 58-tool MCP server that runs full video creation pipelines without human intervention.
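To make the tool-driven pipeline concrete, here is a minimal sketch of an MCP-style tool registry chained into an autonomous session. The tool names, stage functions, and return shapes are hypothetical stand-ins, not the actual INX tool set; a real server would expose these over the MCP protocol and call live model services.

```python
# Minimal sketch of an MCP-style tool registry and pipeline session.
# Tool names and stages are hypothetical, not the actual INX tool set.
from typing import Callable, Dict

TOOLS: Dict[str, Callable[..., dict]] = {}

def tool(name: str):
    """Register a pipeline step under a tool name."""
    def register(fn: Callable[..., dict]) -> Callable[..., dict]:
        TOOLS[name] = fn
        return fn
    return register

@tool("write_script")
def write_script(prompt: str) -> dict:
    # A real pipeline would call an LLM here; this stubs a script.
    return {"script": f"Narration for: {prompt}"}

@tool("synthesize_narration")
def synthesize_narration(script: str) -> dict:
    # Stand-in for a TTS call returning an audio asset reference.
    return {"audio": f"tts://{hash(script) & 0xFFFF}"}

@tool("render_video")
def render_video(script: str, audio: str) -> dict:
    # Stand-in for the renderer assembling narrated interactive HTML.
    return {"video": {"script": script, "audio": audio, "format": "html"}}

def run_pipeline(prompt: str) -> dict:
    """One autonomous session: chain registered tools, prompt to video."""
    script = TOOLS["write_script"](prompt)["script"]
    audio = TOOLS["synthesize_narration"](script)["audio"]
    return TOOLS["render_video"](script, audio)["video"]
```

The registry pattern is what lets an agent discover and sequence tools without human intervention: each stage is a named, independently callable unit, and the session is just a plan over those names.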
Good fit if you have a multi-step workflow that needs AI automation.
End-to-end pipelines for generating and delivering media at scale. Video, audio, interactive content, TTS. I evaluated the models that became Amazon Nova Canvas and Nova Reel.
Good fit if you need AI to produce content at scale.
You have a problem, I build the product. Model selection, pipeline architecture, cloud infrastructure, frontend. Concept to deployed product.
Good fit if you need an AI product built and don't have a technical team.
Narrated, animated, interactive video on the INX platform. Live HTML inside what feels like video — calculators, forms, widgets, 3D viewers.
Good fit if you want interactive, personalized video content.
For investors evaluating AI startups. I assess architecture, model choices, scalability, and defensibility. Built at Amazon scale, published peer-reviewed research. I can tell you if the AI is real.
Good fit if you're a VC or investor evaluating an AI company's technical foundation.
Created INX — the interactive narrated video platform. Built the entire stack solo. Taking on select consulting projects in parallel.
Inception science team. Responsible AI for LLMs, then video/image generation evaluation. Core team that launched Nova Canvas & Nova Reel.
Video understanding research. 10+ papers. Geena Davis Institute — quantifying gender representation in media. Viterbi Fellowship.
Electrical Engineering, dual degree. Gold Medal — best master's thesis across all departments. GATE Fellowship.
10+ peer-reviewed papers across video understanding, multimodal AI, and media analysis. Google Scholar →
Cross-modal identity association for detecting who's speaking in video. Unsupervised audio-visual framework.
ICASSP '20
Convolutional LSTM for tracking articulatory boundaries in real-time MRI video of speech production.
Automated analysis of child-interlocutor dynamics from video for ASD behavioral characterization.
Saliency map generation from gaze data to characterize visual patterns in CVI subjects.
CNN + attention-LSTM predicting TED talk popularity from visual cues. End-to-end trainable with interpretable attention.
Trajectory clustering for pixel-wise motion pattern segmentation in high-density crowd video.
Have something worth building?
I take on a limited number of projects. Tell me about yours and I'll get back to you within 48 hours.