
Shivay Lamba: How to run secure AI anywhere with WebAssembly
Episode · 1:33:49 · Jun 23, 2025
About
Links
- CodeCrafters (partner): https://tej.as/codecrafters
- WebAssembly on Kubernetes: https://www.cncf.io/blog/2024/03/12/webassembly-on-kubernetes-from-containers-to-wasm-part-01/
- Shivay on X: https://x.com/howdevelop
- Tejas on X: https://x.com/tejaskumar_

Summary
In this podcast episode, Shivay Lamba and I discuss the integration of WebAssembly with AI and machine learning, exploring its implications for developers. We dive into the benefits of running machine learning models in the browser, the significance of edge computing, and the performance advantages of WebAssembly over traditional serverless architectures. The conversation also touches on emerging hardware solutions for AI inference and the importance of accessibility in software development. Shivay shares insights on how developers can leverage these technologies to build efficient and privacy-focused applications.

Chapters
00:00 Shivay Lamba
03:02 Introduction and Background
06:02 WebAssembly and AI Integration
08:47 Machine Learning on the Edge
11:43 Privacy and Data Security in AI
15:00 Quantization and Model Optimization
17:52 Tools for Running AI Models in the Browser
32:13 Understanding TensorFlow.js and Its Architecture
37:58 Custom Operations and Model Compatibility
41:56 Overcoming Limitations in JavaScript ML Workloads
46:00 Demos and Practical Applications of TensorFlow.js
54:22 Server-Side AI Inference with WebAssembly
01:02:42 Building AI Inference APIs with WebAssembly
01:04:39 WebAssembly and Machine Learning Inference
01:10:56 Summarizing the Benefits of WebAssembly for Developers
01:15:43 Learning Curve for Developers in Machine Learning
01:21:10 Hardware Considerations for WebAssembly and AI
01:27:35 Comparing Inference Speeds of AI Models

Hosted on Acast. See acast.com/privacy for more information.
© 2025 Acast AB