Episodes

Tuesday Mar 11, 2025
Neural Compression of Atmospheric States
Tuesday Mar 11, 2025
Tuesday Mar 11, 2025
Can AI revolutionize climate research? In this episode, we sit down with Piotr Mirowski from Google DeepMind to explore groundbreaking research that slashes the amount of data needed for climate modeling—without losing the crucial details. The compression ratio they’ve achieved is astonishing, but the real challenge? Preserving rare, high-impact events like typhoons. Get it wrong, and the data becomes useless for predicting exactly the disasters we most need to understand. Listen to find out how AI is revolutionising the way huge climate science datasets are lowering one of the barriers to working in this field.
Paper: [2407.11666] Neural Compression of Atmospheric States
Guests:
Piotr Mirowski, Senior Staff Research Scientist, Google DeepMind
PhD in computer science in 2011 at New York University, with a thesis on “Time Series Modeling with Hidden Variables and Gradient-based Algorithms” supervised by Prof. Yann LeCun. Areas of academic focus include navigation-related research, on scaling up autonomous agents to real world environments, on weather and climate forecasting and now on human–centered AI, and the use of AI for artistic human and machine-based co-creation.
Chapters:
00:00 Introduction
01:23 Aye Aye Fact of the Day
02:20 The Evolution of AI and Personal Experiences
08:31 AI over the last 15 years
10:50 Weather research and Climate Change
13:56 Understanding Data Volume: The Petabyte Challenge
18:21 Modelling Climate: The Complexities of Variables
20:11 The Cost of Climate Science: Data and Resources
26:16 Compression Techniques: Lossy vs Lossless
40:30 Neural Compression: A New Frontier in Data Handling
45:15 Understanding Compression Representations in AI
48:34 Challenges of Representing Spherical Data
56:21 Applying Compression Techniques to Other Data Sets
59:05 Lightning Round
1:03:51 Close out
Music: "Fire" by crimson.

Wednesday Feb 12, 2025
To Err is AI
Wednesday Feb 12, 2025
Wednesday Feb 12, 2025
Episode 4 – To Err is AI
This episode delves into the challenges users face in determining the trustworthiness of AI systems, especially when performance feedback is limited. The researchers describe a debugging intervention to cultivate a critical mindset in users, enabling them to evaluate AI advice and avoid both over-reliance and under-reliance, and we discuss the counter-intuitive ways that humans react to AI.
Paper:
To Err Is AI! Debugging as an Intervention to Facilitate Appropriate Reliance on AI Systems, arXiv:2409.14377 [cs.AI]
Guests:
Gaole He, PhD Student
Ujwal Gadiraju, Assistant Professor
Both at the Web Information Systems group of the Faculty of Electrical Engineering, Mathematics and Computer Science (EEMCS/EWI), Delft University of Technology
Chapters:
00:00 Introduction
00:40 Aye Aye Fact of the Day
01:46 Understanding overreliance and under reliance on AI
02:26 The socio-technical dynamics of AI adoption
04:59 The role of familiarity and domain knowledge in AI use
07:18 The evolution of technology and it impact on trust
10:00 Challenges in AI transparency and trustworthiness
11:33 Background of the paper
12:56 The experiment: Over and under reliance
14:16 Human perception and AI accuracy
18:16 The Dunning-Kruger effect in AI interaction
20:53 Explaining AI: The double-edged sword
23:43 Building warranted trust in AI systems
31:59 Breaking down the Dunning-Kruger effect
39:18 Future research
41:49 Advice to AI product owners
45:45 Lightning Round – Can Transformers get us to AGI?
48:58 Lightning Round – Should we keep training LLM’s?
52:01 Lightning Round – Who should we follow?
54:38 Likelihood of an AI apocalypse?
58:10 Lightening Round – Recommendations for tools or techniques
1:00:48 Close out
Music: "Fire" by crimson.

Tuesday Jan 14, 2025
Indirect Prompt Injection: Generative AI's Greatest Security Flaw
Tuesday Jan 14, 2025
Tuesday Jan 14, 2025
In this episode we discuss the critical security flaw of indirect prompt injection in generative AI (GenAI) systems. Our guests explain how attackers can manipulate these systems by inserting malicious instructions into the data they access, such as emails and documents. This can lead to various issues, including disinformation, phishing attacks and denial of service. They also emphasize the importance of data hygiene, user training and technical safeguards to mitigate these risks, and they further discuss how the integration of large language models (LLMs) into organizational systems increases the attack surface. In summary RAG is vulnerable unless you take strong mitigating actions.
Paper:
Indirect Prompt Injection: Generative AI’s Greatest Security Flaw | Centre for Emerging Technology and Security
Guests:
Chris Jefferson , CEO AdvAI, https://www.linkedin.com/in/chris-jefferson-3b43291a/
Matt Sutton, https://www.linkedin.com/in/matthewsjsutton/
Chapters:
00:00 Introduction
01:48 Understanding RAG and it’s vulnerabilities
04:42 The significance of Indirect Prompt Injection
07:28 Attack vectors and real-world implications
10:04 Mitigation strategies for indirect prompt injection
12:45 The future of AI security and agentic processes
28:27 The risks and rewards of agentic design
33:50 Navigating phishing in AI systems
35:53 The role of public policy in AI safety
41:55 Automating risk analysis in AI
44:44 Future research directions in AI risks
48:08 Reinforcement learning agents and automation
48:53 AI in cybersecurity: attacking and defending
50:21 The ethics and risks of AI technology
52:51 The lightning Round
1:01:53 Outro
Music: "Fire" by crimson.

Tuesday Nov 12, 2024
Open and remotely accessible Neuroplatform for research in wetware computing
Tuesday Nov 12, 2024
Tuesday Nov 12, 2024
In this episode of the Aye Aye AI podcast, we delve into the revolutionary field of wetware computing. Dr. Fred Jordan, CEO of FinalSpark, shares his journey from traditional computer science to exploring the efficiency of organic neurons over silicon computers. Discover the parallels between this emerging field and the early days of machine learning, AI and quantum computing. Could wetware computing be the solution to the massive energy demands of data centers?
Paper:
Open and remotely accessible Neuroplatform for research in wetware computing
Guest:
Dr Fred Jordan – CEO FinalSpark, (LinkedIn)
(Note: Co-authors Martin Kutter, Jean-Marc Comby and Flora Brozzi were unable to join us)
Links discussed:
Live - FinalSpark
https://lloydwatts.com/images/wholeBrain_007.jpg
Chapters:
0:13 Podcast Introduction
1:50 Summary of the Paper
3:44 Introducing Dr. Fred Jordan
4:25 Fred's Background and FinalSpark
7:11 Understanding Brain Organoids
10:20 Building the Team
12:13 Energy Efficiency in Research
13:43 Comparing Neural Systems
16:03 Exploring Training Mechanisms
17:29 The Nature of Brain Tissue
20:00 Accessing Research Data
26:57 Projects in Progress
28:43 The Evolution of Biocomputing
32:34 Future of Wetware Computing
37:59 The Ethics of Wetware
42:11 Hopes for the Future
43:38 Lightning Round Questions
47:37 Conclusion and Farewell
Music credits : "Fire" by crimson.

Tuesday Oct 08, 2024
Persuasion Games using Large Language Models
Tuesday Oct 08, 2024
Tuesday Oct 08, 2024
In this episode of Aye Aye AI, Christian and Arijit explore how large language models (LLMs) can actively shape user decisions in areas like investments and insurance. Joined by leading AI researchers Shirish and Ganesh, they discuss the groundbreaking use of multi-agent frameworks and how emotions impact persuasion. Learn how AI can influence, resist, and even adapt in real-time interactions, offering a glimpse into the future of AI-driven persuasion in business. Don't miss this deep dive into the evolving role of AI in decision-making
Paper:
https://arxiv.org/abs/2408.15879
Guests:
Shirish Karande – Principal Scientist and Head of Media & Advertising Research Area at TCS, Shirish Karande | LinkedIn
Ganesh Prasath Ramani – Associate Director – Generative AI at Cognizant, Ganesh Prasath Ramani | LinkedIn
(Co-authors Santhosh V, Yash Bhatia were not able to join us on the podcast)
Chapters
0:06 Introduction to Aye Aye AI Podcast
1:00 Exploring Persuasion Games with LLMs
2:35 Meet the Authors
3:31 Origins of the Research
8:27 Multi-Agent Framework Explained
10:00 User Resistance Strategies
11:18 The Role of Emotions in Persuasion
12:54 Evaluating LLMs vs. Human Responses
27:54 Real-World Applications Beyond E-commerce
33:59 Ethical Considerations in Persuasion Technology
43:45 Future Directions of Research
50:09 The Challenge of Grounding Personalities
50:42 Lightning Round: Quick Questions
57:15 Conclusion and Farewell
Music credits : "Fire" by crimson.

Tuesday Sep 24, 2024
Introduction to the Aye Aye AI Podcast
Tuesday Sep 24, 2024
Tuesday Sep 24, 2024
Arijit and Christian introduce you to the Aye Aye AI Podcast and start to introduce our mascot the delightful Aye Aye.

Your Hosts
Arijit Sircar is a senior data scientist with a deep history of implementing data centric solutions within financial services and an AGI enthusiast.
Christian Hull is a product guy, nerd, public speaker, innovator and is fascinated by emerging technology and AI, he's also spent a career delivering change and digital transformation in financial services.