LLMs are coming to Apache Solr, Elastic Rerank model, Gemini 2.0 and much more!
Step into the world of Information Retrieval and AI, where our curious minds 👾 are always uncovering the freshest trends and innovations. Explore our insights, enjoy the journey, and hit subscribe to stay ahead of the curve!
📰 News
Large Language Models are coming to Apache Solr! The integration of Large Language Models (LLMs) with Apache Solr marks an exciting new chapter for search, starting with the upcoming Solr 9.8 release. This milestone adds support for external text-to-vector inference models directly in a textual query parser, enabling seamless semantic search: users submit plain-text queries, and behind the scenes Solr encodes them into vectors and runs an efficient vector-based search.
➡️ Read more
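As a sketch of how such a query might look (the model name `my-embedding-model`, the vector field `vector`, and the `knn_text_to_vector` parser name are assumptions based on the Solr 9.8 preview, not confirmed syntax):

```
q={!knn_text_to_vector model=my-embedding-model f=vector topK=10}how do rerankers improve relevance
```

Here the query text after the closing brace is sent to the registered external inference model, the returned embedding is used for a KNN search against the `vector` field, and the top 10 results come back like any other Solr query.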
Amazon Kendra GenAI Index is a new index in Amazon Kendra designed for RAG and intelligent search to help enterprises build digital assistants and intelligent search experiences more efficiently and effectively. This index offers high retrieval accuracy, using advanced semantic models and the latest information retrieval technologies.
➡️ Read more
Introducing Pinecone Rerank V0: pinecone-rerank-v0 is now available in public preview.
Designed to significantly enhance enterprise search and retrieval augmented generation (RAG) systems, this new model provides a powerful boost in relevance and accuracy, ensuring that your search results and AI-generated content are grounded in the most relevant and contextually precise information.
➡️ Read more
“Take your search experiences up to level 11 with our new state-of-the-art cross-encoder Elastic Rerank model (in Tech Preview). Reranking models provide a semantic boost to any search experience, without requiring you to change the schema of your data, giving you room to explore other relevance tools for semantic relevance on your own time and within your budget.”
➡️ Read more
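As an illustrative sketch of what a reranked search request could look like (the index name, field names, query text, and the inference endpoint id `.rerank-v1-elasticsearch` are assumptions based on Elastic's preview documentation for the `text_similarity_reranker` retriever):

```
POST /my-index/_search
{
  "retriever": {
    "text_similarity_reranker": {
      "retriever": {
        "standard": {
          "query": { "match": { "content": "vector search tuning" } }
        }
      },
      "field": "content",
      "inference_id": ".rerank-v1-elasticsearch",
      "rank_window_size": 50
    }
  }
}
```

The inner `standard` retriever fetches an initial candidate set, and the cross-encoder reranks the top `rank_window_size` hits by scoring each document's `content` against the query, with no change to the index mapping.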
“Today we’re excited to launch our next era of models built for this new agentic era: introducing Gemini 2.0, our most capable model yet. With new advances in multimodality — like native image and audio output — and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant.”
➡️ Read more
“Today we’re introducing Structured Outputs in the API, a new feature designed to ensure model-generated outputs will exactly match JSON Schemas provided by developers.”
➡️ Read more
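As a minimal sketch of what a Structured Outputs request body looks like in the Chat Completions API (the schema, prompts, and field names here are illustrative assumptions, not taken from the announcement):

```python
import json

# Illustrative JSON Schema that the model's output must match exactly.
event_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "date": {"type": "string"},
        "attendees": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["name", "date", "attendees"],
    "additionalProperties": False,
}

# Request body with Structured Outputs enabled via response_format.
# "strict": True asks the API to guarantee schema-conformant output.
request_body = {
    "model": "gpt-4o-2024-08-06",
    "messages": [
        {"role": "system", "content": "Extract the event details."},
        {"role": "user", "content": "Alice and Bob meet for standup on Friday."},
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "event",
            "strict": True,
            "schema": event_schema,
        },
    },
}

print(json.dumps(request_body, indent=2))
```

With this in place, the model's reply is guaranteed to be a JSON object with exactly the `name`, `date`, and `attendees` fields, so downstream code can parse it without defensive validation.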
🏆 Must-Read Research and Papers:
🗓️ Next events:
In case you missed our talks:
If you know of any AI & Information Retrieval events that are not listed do give us a heads-up so we can add them!
About us
Sease is an Information Retrieval company based in London, focused on building search solutions and AI integrations with cutting-edge machine learning, such as Large Language Models (RAG, vector-based search) and Learning to Rank.