Addressing Hallucinations - LLMs You Can Trust with Scott Cohen | Disciplined Troublemakers

Welcome back to Disciplined Troublemakers! In this episode, we dive into the AI world with Scott Cohen from Jaxon AI, known for his groundbreaking work with the Department of Defense. Scott explores AI's ROI, scalability, and the buzz around AI hallucination. Uncover Jaxon AI's unique strategies and the importance of human creativity in the age of advanced AI. Discover the company's commercial shift and funding goals as it brings its AI capabilities to the forefront. Tune in to learn how Jaxon AI combines innovation with confidence for the future of AI.

Quotes:

"If we solely relied on AI, the spark of creativity would be lost. Instead, AI's true power lies in empowering us to reach new heights." - Scott Cohen

"Maintaining a human element in AI operations ensures guidance towards tasks and broader objectives." - Scott Cohen

Featured in this Episode:

Scott Cohen

Chapters:

00:00 - Introduction
04:04 - Focus on Specific Industries for AI
09:46 - Verification for Token Prediction Models
11:35 - Hallucinations vs Inaccuracies in Transformer Models
15:39 - Efficient AI Building with DETAIL Tool Chain
19:39 - Cooking Skills: Learning and Adapting
20:19 - Enterprise Costs and Impact Inquiry
24:27 - Human-Driven Decision-Making
28:12 - Scott Shagory Interviews Scott Cohen on Fundraising
32:04 - Commercial vs. Government: Value Propositions
34:46 - Balancing Human Uniqueness with AI in Business
37:03 - Human-AI Communication in the Workplace
41:53 - Collaboration Issues in AI Project Failures
45:10 - Navigating Organizational Hierarchy Challenges
47:07 - Importance of Educational Outreach in Data Science Teams
52:50 - Conclusion

Produced by Heartcast Media