Block Launches Goose: Your Open-Source AI Agent

Jack Dorsey’s Block is unleashing “Goose,” an open-source AI agent ready to handle your toughest tasks! Say hello to Goose: a next-level framework designed for developers to create and deploy AI assistants on practically any platform. Seamless integration with top-tier LLMs (like Anthropic, OpenAI, and DeepSeek) means data stays private, while Goose automates tasks like code migrations and dependency management mid-session. If Block’s track record with Square and Cash App is any indication, Goose could be the next disruptive force in AI. Don’t sleep on this: open source often means faster innovation and broader impact. Key takeaway: Want in on the agent revolution? Goose might just be your golden egg.
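For readers curious what an agent framework like this does under the hood, here is a minimal sketch of the generic loop such tools follow: the model proposes a tool call, the runtime executes it, and the result is fed back. Every name here (`run_agent`, `TOOLS`, `fake_llm`) is illustrative, not Goose's actual API.

```python
# A toy registry of tools the agent may call, standing in for real
# operations like dependency management or code migration.
TOOLS = {
    "list_dependencies": lambda arg: ["requests==2.31.0", "flask==3.0.0"],
    "bump_dependency": lambda arg: f"bumped {arg}",
}

def run_agent(llm, task, max_steps=5):
    """Loop: ask the model for the next tool call until it says 'done'."""
    transcript = [task]
    for _ in range(max_steps):
        action = llm(transcript)           # e.g. ("bump_dependency", "flask")
        if action == "done":
            break
        name, arg = action
        result = TOOLS[name](arg)          # execute the tool locally
        transcript.append((name, arg, result))
    return transcript

# A canned stand-in for a real LLM, so the loop runs offline.
def fake_llm(transcript):
    steps = [("list_dependencies", None), ("bump_dependency", "flask"), "done"]
    return steps[len(transcript) - 1]

log = run_agent(fake_llm, "update stale dependencies")
```

The key design point is that the model never touches the system directly; the runtime mediates every tool call, which is what makes local, private execution possible.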
Faheem Naseer’s Post
-
At Empower, AI isn’t just a buzzword; it’s how we’re building better financial products. Our Senior Credit Manager, Aditya Sharma, recently spoke with Built In about how our team incorporates AI/ML technology into our data science work. Check out the article to learn how we’re leveraging this emerging tech to accelerate our testing, time to market, and more. https://lnkd.in/eCy6S7c8
-
LLM OS: The Future of AI-Driven Operating Systems

As we move deeper into the age of AI, we're witnessing the rise of a revolutionary concept: LLM OS (Large Language Model Operating Systems). Imagine an operating system powered by the intelligence of AI, capable of understanding, adapting, and interacting with users like never before.

LLM OS is not just about running applications; it's about creating a seamless, intelligent layer that can interpret natural language, automate complex workflows, and learn from every interaction. This next-generation OS transforms the way humans interact with machines by offering context-aware experiences, from streamlining business operations to enabling personal assistants that truly understand the user’s intent.

At DevOpzLabs, we believe the future of technology lies in this AI-native infrastructure. With LLM OS, the boundaries between traditional computing and AI-driven decision-making are blurred. Picture an ecosystem where AI not only assists but collaborates with you, optimizing performance, predicting needs, and integrating insights into real-time actions.

This is the future we’re working towards: one where LLM OS empowers businesses, accelerates innovation, and provides intelligent automation at an unprecedented scale. The possibilities are endless, and we’re just at the beginning of this exciting journey. Are you ready to embrace the future of AI with LLM OS?
-
Many assumed that the industry would be well on its way to AGI or superintelligence before AI model builders’ efforts to take advantage of scaling laws hit a wall. Recent headlines are chipping away at that assumption. One question I've been getting in my inbox: is this the era of diminishing returns for AI? We don't think so. Even if an avalanche of new clusters and synthetic data fails to deliver significantly better model performance, there are good reasons to believe that AI is still going to become orders of magnitude more powerful in the coming years — powering a new generation of startups and ultimately productivity growth and positive benefits for mankind. In fact, with agents and reasoning just beginning to scale and thousands of industry-specific models coming online, it's about to get really exciting. Read our latest blog post on the topic: https://lnkd.in/gFhniNX8
-
OpenAI just launched o3-mini: its most cost-efficient AI model yet. A small but mighty model, o3-mini uses chain-of-thought reasoning to tackle science, math, and coding tasks with surprising accuracy. In some cases, it even outperforms larger models like o1. For solopreneurs and business owners, this means:
↳ Better AI at a lower cost
↳ More control over model effort (and expenses)
↳ Smarter automation for complex workflows
Available now in ChatGPT and the API. Free users get a taste, but Pro and Plus subscribers get real access. This isn't just another AI update; it’s a glimpse into the future of smarter, leaner AI. How will you use o3-mini to level up your business? Let me know in the comments.
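As a rough sketch of what "control over model effort" looks like in practice, here is a hypothetical request builder. The `reasoning_effort` parameter and its low/medium/high values mirror what OpenAI documented for o3-mini at launch, but treat the exact names as an assumption to verify against the current API reference.

```python
# Build a chat-completions-style request payload for o3-mini, with the
# effort dial exposed as a parameter. No network call is made here.
def build_request(prompt, effort="medium"):
    if effort not in {"low", "medium", "high"}:
        raise ValueError("effort must be low, medium, or high")
    return {
        "model": "o3-mini",
        "reasoning_effort": effort,   # more effort: better answers, more cost
        "messages": [{"role": "user", "content": prompt}],
    }

# Cheap task -> dial the effort (and the bill) down.
req = build_request("Factor x^2 - 5x + 6", effort="low")
```

The practical upshot: you pick the effort level per request, so routine queries don't pay for maximum-effort reasoning.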
-
🚀 Exciting developments ahead! The use of #AIAgents is revolutionizing how we approach data and technology. At Observata AB, we're integrating LangChain's cutting-edge AI capabilities with Elastic's robust search engine in our #HYPRSeek Service to deliver next-level search solutions. 🤖✨
This progress brings:
🔍 Smarter Data Retrieval: AI-driven insights paired with powerful search technology.
💡 Enhanced Solutions: More accurate, efficient, and tailored outcomes for our customers.
Stay tuned as we continue to innovate and transform industries with these advancements! #AI #Innovation #AIAgents #SearchTechnology #CustomerSuccess #Observata #HYPRServices #HYPRSeek
How we built Automatic Import, Attack Discovery, and Elastic AI Assistant using LangChain
elastic.co
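As a hedged illustration of pairing "AI-driven insights" with keyword search, here is a generic hybrid-ranking sketch: blend a lexical (BM25-style) score with a semantic similarity score. The weighting and scores are invented, and this is neither Observata's implementation nor the real LangChain or Elasticsearch API.

```python
# Blend lexical and semantic relevance into one ranking.
# alpha weights the lexical score; (1 - alpha) weights the semantic score.
def hybrid_rank(docs, alpha=0.5):
    """docs: list of (doc_id, lexical_score, semantic_score), scores in [0, 1]."""
    scored = [(d, alpha * lex + (1 - alpha) * sem) for d, lex, sem in docs]
    return sorted(scored, key=lambda t: t[1], reverse=True)

# With alpha=0.3 the semantic signal dominates, so the conceptually
# closest document wins even without strong keyword overlap.
results = hybrid_rank([
    ("faq-12", 0.9, 0.2),    # strong keyword match, weak semantic match
    ("kb-07", 0.3, 0.95),    # weak keyword match, strong semantic match
], alpha=0.3)
```

Tuning `alpha` per corpus is the core design decision in hybrid search: keyword-heavy domains (SKUs, error codes) want a high alpha, conversational queries a low one.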
-
Yesterday I had COUNTLESS conversations and emails and two press inquiries on my thoughts on DeepSeek 🤯… So here goes ➡️

Instead of building an AI capable of “everything, everywhere,” DeepSeek specialized in niche domains over generalized intelligence. The algorithms weren’t just trained on vast datasets; they were also reworked with human oversight in ways the major players have not fully explored. BUT some early indications are that they may have distilled, or "borrowed," training data from the major LLMs like OpenAI, Grok, and Anthropic. If that is the case, there could be major legal issues.

But here’s why the big players—ChatGPT, Claude, and X—still lead and will likely win in the long run:
1. Scale and Versatility: Big AI excels at adapting to broad, dynamic needs. Their models process diverse data at massive scales, enabling unparalleled versatility for businesses and consumers alike.
2. Ecosystem Dominance and Security: These platforms are embedded across countless tools and workflows, with security protocols and data governance, all offering seamless integration that smaller players like DeepSeek can’t match.
3. Iterative Improvement: Real-time feedback from billions of interactions fuels constant evolution. This dynamism ensures their relevance and competitiveness as new use cases emerge.

Interestingly, as I type this, Big AI firms will be incorporating some of DeepSeek’s concepts, like domain-specific fine-tuning and edge-based deployment, into their products. This all plays out in the realm of GenAI, but the next wave of real value will come from Traditional (Core) AI. Traditional AI, optimized for efficiency, decision-making, and automation, will drive transformation in logistics, retail, manufacturing, healthcare, and beyond. The question isn’t whether GenAI or Core AI will dominate but how the best of both worlds can converge to solve the toughest challenges ahead.
With DeepSeek, honestly, I am far more concerned about security, legal issues, and privacy than its stated performance metrics. For now, if I were a business, I would steer clear of DeepSeek.
-
DeepSeek demonstrated how base model improvements strengthen the Personal AI tech stack while bringing costs down. Our focus is on memory-based small language models that are contextually aware and grounded in proprietary data that foundation models cannot access. This need for context awareness is especially critical in human-capital-intensive businesses, from legal and healthcare to bakeries and cafes. With our recent MODEL-3 release, we announced multi-memory: the ability for a single small language model (or "Persona") to carry multiple conversations simultaneously with different conversational contexts. The feeling this gives is similar to going to your favorite coffee shop, where the barista already knows your drink and simply confirms the order with you. It's a level of context and understanding that matters at the individual level. I'm excited about further optimizing this in MODEL-3, benefiting from innovations like DeepSeek, and bringing our multi-memory capabilities to the mass market (see video).
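A minimal sketch of the multi-memory idea, assuming a simple per-conversation history store: one persona tracks several conversations, each with its own isolated context. The `Persona` class and its methods are hypothetical, not the MODEL-3 API.

```python
# One model "Persona" holding separate histories per conversation,
# so context from one user never leaks into another's replies.
class Persona:
    def __init__(self, name):
        self.name = name
        self.memories = {}                  # conversation_id -> list of turns

    def remember(self, convo_id, turn):
        self.memories.setdefault(convo_id, []).append(turn)

    def context(self, convo_id):
        """Return only the history for this conversation."""
        return self.memories.get(convo_id, [])

barista = Persona("barista")
barista.remember("alice", "usual order: oat-milk latte")
barista.remember("bob", "usual order: double espresso")
```

The coffee-shop analogy from the post maps directly: `context("alice")` is the barista recalling Alice's usual without mixing in Bob's.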
-
🎉 Excited to Kick Off 2025 with My First Blog Post on LLMs, AI Agents, and Function Calling!

As we dive deeper into the transformative world of AI, I’m thrilled to share my latest exploration: the theoretical and practical implementation of AI agents and function calling using LLMs. In this blog, I delve into how LLMs can act as operating systems and intermediaries, paving the way for innovative solutions.

🛠️ Highlights from the Blog:
1️⃣ AI Agents with Financial Data: Built a simple AI agent leveraging function calling and word matching based on user input, and integrated the Yahoo Finance API to fetch relevant financial data dynamically.
2️⃣ OpenAI API Function Calling: Defined functions in a dictionary format to dynamically match and retrieve attributes based on user queries, and demonstrated how user inputs can trigger function-based outputs for seamless interactions.
3️⃣ Looking Ahead: Discussed how machine learning techniques can be incorporated for forecasting and predictive insights in future implementations.

This blog is perfect for anyone passionate about AI, functional programming, and practical LLM applications. Whether you're just starting with AI or looking for advanced use cases, you’ll find something valuable.

💡 Why This Blog Matters: LLMs are no longer just text generators; they are becoming powerful orchestrators and interpreters, capable of transforming workflows across industries. This blog showcases how to unlock their potential.

📖 Check out the full blog: https://lnkd.in/gCazUErb
🌐 Let’s connect, discuss, and innovate together!
🔗 If this resonates with you, feel free to share, comment, or DM. I’d love to hear your thoughts and feedback.
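The dictionary-based, word-matching dispatch the post describes can be sketched like this. The function specs and canned return values are illustrative stubs, not the blog's actual code or live Yahoo Finance data.

```python
# Function specs in a dict: keywords to match against the user's query,
# plus a callable to run. The lambdas are stubs standing in for real
# Yahoo Finance lookups.
FUNCTIONS = {
    "get_stock_price": {
        "keywords": {"price", "quote", "trading"},
        "call": lambda ticker: {"ticker": ticker, "price": 123.45},
    },
    "get_dividend": {
        "keywords": {"dividend", "yield"},
        "call": lambda ticker: {"ticker": ticker, "yield_pct": 0.5},
    },
}

def dispatch(query, ticker):
    """Route a query to the first function whose keywords overlap it."""
    words = set(query.lower().split())
    for name, spec in FUNCTIONS.items():
        if words & spec["keywords"]:       # any keyword overlap wins
            return name, spec["call"](ticker)
    return None, None

name, result = dispatch("What is the price of AAPL?", "AAPL")
```

Word matching like this is the simplest routing strategy; OpenAI-style function calling replaces the keyword step with the model itself choosing which function schema to invoke.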
-
The title should have been: "In AI systems, smaller is almost always better." Good to see this article on small language models in the WSJ; small models are the optimal approach for internal chatbots run on enterprise data. Unfortunately, it still misses the bigger issue that language models have limited use, and it doesn't mention the efficiency, accuracy, and productivity gained by providing relevant data to begin with, tailored to each entity.

Even if limiting reporting to language models (which shouldn't be done when attempting to cover all of AI systems), please go beyond LLM firms and big tech, as they have natural conflicts: they are scale-dependent. Mentioning big tech and LLM firms is like citing fast-food giants for stories on good nutrition. Yes, one can find an occasional story, but that's not where most of the value is. It gives readers the wrong impression. There is an entire health-food industry out there. The same is true for responsible AI. That said, it's an improvement over the LLM hype-storm.

~~~~~
“It shouldn’t take quadrillions of operations to compute 2 + 2,” said Illia Polosukhin.
“If you’re doing hundreds of thousands or millions of answers, the economics don’t work” to use a large model, Shoham said.
“You end up overpaying and have latency issues” with large models, Shih said. “It’s overkill.”
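The quoted economics can be made concrete with back-of-envelope arithmetic. The per-token prices below are invented placeholders, not real vendor pricing; the point is the ratio, not the absolute numbers.

```python
# Hypothetical USD prices per 1K tokens for a large frontier model
# versus a small language model.
PRICE_PER_1K_TOKENS = {"large": 0.060, "small": 0.002}

def monthly_cost(model, tokens_per_query, queries):
    """Total spend for a month of queries at a flat token count each."""
    return PRICE_PER_1K_TOKENS[model] * tokens_per_query / 1000 * queries

# One million simple internal queries a month, ~500 tokens each.
large = monthly_cost("large", 500, 1_000_000)
small = monthly_cost("small", 500, 1_000_000)
```

At these placeholder prices the large model costs 30x more for workloads a small model answers just as well, which is exactly the "overpaying" the quotes describe.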
-
Deploying generative AI at scale is rarely as simple as a single prompt and a single response. Under the hood, your application might be making multiple chained API calls—intent detection, toxicity filtering, retrieval, re-reranking, multi-candidate generation, verification, and summarization—all of which multiply token usage and push against quotas and rate limits. Hit those limits, and you could face cascading failures, unpredictable performance, and hidden bottlenecks. It's important to consider how token usage accumulates, explore options like #ProvisionedThroughput or #DynamicQuotaSharing, consider the trade-offs of self-hosting with GPU-hour models, and discover how smarter token management (re-rankers, careful prompts, caching) can help you do more with what you already have. If you’re building or scaling a #GenAI application, I'd love to hear your feedback on what scaling strategies have worked for you and your team! https://lnkd.in/eRAH57qz
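To see how chained calls multiply token usage against a quota, consider this sketch. The pipeline stages echo the ones named above, but the token counts and the rate limit are illustrative numbers, not measurements.

```python
# A hypothetical tokens-per-minute quota from the model provider.
TOKENS_PER_MINUTE_LIMIT = 100_000

# Each stage of the chained pipeline and its rough token cost per
# user request (made-up figures for illustration).
PIPELINE = [
    ("intent_detection", 300),
    ("toxicity_filter", 200),
    ("retrieval_rerank", 2_000),
    ("generation_x3", 4_500),      # three candidate generations
    ("verification", 800),
    ("summarization", 600),
]

def tokens_per_request(pipeline):
    return sum(tokens for _, tokens in pipeline)

def max_requests_per_minute(pipeline, limit=TOKENS_PER_MINUTE_LIMIT):
    """How many full pipeline runs fit under the quota each minute."""
    return limit // tokens_per_request(pipeline)

per_req = tokens_per_request(PIPELINE)
capacity = max_requests_per_minute(PIPELINE)
```

One user request here consumes thousands of tokens across six calls, so the quota supports far fewer requests per minute than a naive single-call estimate suggests; caching or dropping a stage raises capacity directly.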