Show HN: Agentic AI Frameworks on AWS (LangGraph, Strands, CrewAI, Arize, Mem0)

Hacker News - AI
Aug 2, 2025 00:20
thinkagenticai
1 view
hackernewsaidiscussion

Summary

AWS has released open-source reference implementations demonstrating how to build production-ready Agentic AI applications using frameworks and tools such as LangGraph, Strands, CrewAI, and Arize. The repository showcases workflows for retrieval-augmented generation (RAG), memory, planning, observability, and evaluation, built on AWS services such as Bedrock and Step Functions. It offers a useful starting point for developers building robust, scalable agentic AI systems on AWS infrastructure.

We’ve published a set of open-source reference implementations showing how to build production-grade Agentic AI applications on AWS. What’s in the repo (minimal sketches of the RAG and evaluation pieces follow below):

• Agentic RAG, memory, and planning workflows with LangGraph & CrewAI
• Strands-based flows with observability using OTEL & Arize
• Evaluation with LLM-as-judge and cost/performance regressions
• Built with Bedrock, S3, Step Functions, and more

GitHub: https://github.com/aws-samples/sample-agentic-frameworks-on-...

Would love your thoughts — feedback, issues, and stars welcome!

Comments URL: https://news.ycombinator.com/item?id=44763850
Points: 1 | Comments: 0
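
To make the first bullet concrete, below is a minimal agentic-RAG loop in the LangGraph style, calling a Bedrock model through langchain-aws. This is an illustrative sketch rather than code from the repo: the RagState fields, the stubbed retrieve step, and the model ID are assumptions.

    # Minimal agentic-RAG sketch with LangGraph + Bedrock (illustrative; not from the repo).
    from typing import TypedDict, List

    from langchain_aws import ChatBedrock          # Bedrock chat model wrapper
    from langgraph.graph import StateGraph, END    # LangGraph state machine

    class RagState(TypedDict):
        question: str
        documents: List[str]
        answer: str

    # Assumed model ID; swap in whatever Bedrock model your account has enabled.
    llm = ChatBedrock(model_id="anthropic.claude-3-sonnet-20240229-v1:0")

    def retrieve(state: RagState) -> dict:
        # Stub retrieval step; a real app would query a vector store or a Bedrock Knowledge Base.
        return {"documents": ["...retrieved passages relevant to the question..."]}

    def generate(state: RagState) -> dict:
        context = "\n".join(state["documents"])
        prompt = f"Answer using only this context:\n{context}\n\nQuestion: {state['question']}"
        return {"answer": llm.invoke(prompt).content}

    graph = StateGraph(RagState)
    graph.add_node("retrieve", retrieve)
    graph.add_node("generate", generate)
    graph.set_entry_point("retrieve")
    graph.add_edge("retrieve", "generate")
    graph.add_edge("generate", END)

    app = graph.compile()
    print(app.invoke({"question": "What does agentic RAG add over plain RAG?"})["answer"])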

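The evaluation bullet can be sketched in the same spirit as a small LLM-as-judge check against the Bedrock runtime API via boto3. The rubric, judge model ID, and PASS/FAIL scheme here are assumptions for illustration, not the repo's actual evaluator.

    # LLM-as-judge sketch using the Bedrock runtime API (illustrative assumptions throughout).
    import json
    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    def judge(question: str, answer: str) -> str:
        """Ask a Bedrock model to grade an answer; returns 'PASS' or 'FAIL'."""
        prompt = (
            "You are grading an AI assistant's answer. Reply with PASS or FAIL only.\n"
            f"Question: {question}\nAnswer: {answer}"
        )
        body = {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 10,
            "messages": [{"role": "user", "content": prompt}],
        }
        response = bedrock.invoke_model(
            modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed judge model
            body=json.dumps(body),
        )
        result = json.loads(response["body"].read())
        return result["content"][0]["text"].strip()

    print(judge("What is 2 + 2?", "4"))  # expected: PASS

A fuller evaluator would aggregate verdicts across a test set and track cost/latency regressions, but the generate-then-grade pattern above is the core of it.
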
Related Articles

Why Cold Wallet’s $0.00942 Presale Is Gaining Attention While XRP Charts Stall & Stellar Fails to Incentivise Users

Analytics Insight · Aug 2

The article highlights growing interest in Cold Wallet’s $0.00942 presale, attributing its momentum to rising concern over digital asset security and to the project’s technology. In contrast, established cryptocurrencies like XRP and Stellar are struggling to engage users and maintain growth. The trend underscores a shift toward advanced, security-focused solutions, which could influence future AI-driven developments in digital asset management.

WebGPU enables local LLM in the browser. Demo site with AI chat

Hacker News - AI · Aug 2

A new demo site showcases how WebGPU technology allows large language models (LLMs) to run locally within web browsers, enabling AI chat without server-side processing. This advancement highlights the potential for more private, efficient, and accessible AI applications directly in users' browsers, reducing reliance on cloud infrastructure.

The Parallel Lives of an AI Engineer

Hacker News - AI · Aug 2

The article "The Parallel Lives of an AI Engineer" explores the dual nature of an AI engineer's work, balancing rapid technological advancements with the practical challenges of implementation in real-world systems. It highlights the tension between innovation and stability, emphasizing the need for engineers to adapt quickly while maintaining reliable products. This duality underscores the evolving demands and complexities faced by professionals in the AI field.