About
Justin Baird
FRSA — Fellow, Royal Society of Arts
AI systems builder working at the intersection of multimodal AI, robotics, and developer ecosystems.
I build embodied AI systems combining computer vision, language models, diffusion models, robotics, and real-time signal processing to produce physical generative outputs. My research and experimental systems have resulted in NeurIPS publications, conference installations viewed by thousands, and open-source projects exploring human-AI creative collaboration.
My career has spanned from launching developer platforms at Google, where I invented Underwater Street View and delivered early Android demos alongside Andy Rubin, to leading technology innovation at Accenture, Microsoft, and DHL across Asia Pacific. I collaborated with Google Brain's Magenta team on a musical AI robot for the Sydney Opera House, and conceived Sophia the Robot's AI singing voice for her duet with Jimmy Fallon.
Currently I'm co-founding Neuroflix (a generative AI video learning platform that won the Google Gemini Hackathon Singapore), pursuing doctoral research in Generative AI at Golden Gate University, and leading the Embodied Agents Lab at SQ Collective in Singapore — weekly hands-on sessions building AI agents that perceive, decide, and act in physical environments.
I’m focused on translating frontier AI research into developer reference implementations, technical demonstrations, and educational content that help engineers understand and build with emerging AI technologies.

Areas of Work
Embodied AI & Robotics
Building robotic systems that combine computer vision, language models, diffusion models, and real-time signal processing to produce physical generative outputs.
Generative & Multimodal AI
Developing cross-modal AI pipelines for Synthaesthetic Art — translating between sound, image, and motion through generative models and robotic systems.
Developer Ecosystems
Translating frontier AI research into developer reference implementations, technical demonstrations, and educational content that help engineers build with emerging AI technologies.
AI Research & Publications
Research published at NeurIPS and ACM HAI exploring human-machine creative collaboration, real-time interactive systems, and cross-modal generative AI.
Experience
Generative AI platform converting enterprise knowledge into dynamic video learning experiences. Won the Google Gemini Hackathon Singapore and evolved the platform into a commercial AI product.
Researching generative AI systems, with a focus on multimodal pipelines and cross-modal creative AI.
Delivered architecture guidance and workshops for startups adopting AI and cloud technologies. Supported founders building scalable AI platforms on Azure infrastructure.
Technology platform supporting ecosystem restoration and carbon markets. Architected Verity Carbon, a digital MRV (measurement, reporting, and verification) platform for carbon credit verification with ARIPO patent-pending data authentication.
Developed AI systems including drone-based inspection tools. Collaborated with Google Brain's Magenta team on a musical AI robot for the Sydney Opera House. Lead speaker for Accenture Technology Vision.
Built development and data science teams delivering analytics platforms across logistics operations. Conceived and secured funding for the Carbon Twin environmental decision-making platform.
Launched YouTube across Asia Pacific. Delivered early Android developer demos alongside Andy Rubin. Invented and patented Underwater Street View (featured in Chasing Coral). Introduced Street View Trike in Australia.