Using existing large language models (LLMs), we developed an analytical platform, similar to ChatGPT, that analyzes the company’s internal data and generates responses to questions based on that information.
Our client, an emerging startup, had a vision for a product designed for sale to their major clients in the retail sector.
Detailed information about the client cannot be disclosed under the terms of the NDA.
Primary pain point: Internal documents, including employee records, marketing data, and sales information, are hard to access. With thousands of files in formats like PDF, CSV, Parquet, TXT, and DOCX, locating and analyzing specific information is time-consuming and error-prone.
Secondary challenges: As a company grows, the volume of documents and information increases, further intensifying the challenges of data accessibility and analysis. Without a proper document analytics system, these issues become increasingly evident over time.
Recognizing these challenges, our client contacted Innowise to get a chatbot for data analytics, with the goal of offering it to their major clients.
Innowise developed the chatbot data analytics software on top of existing large language models. The chat system works much like publicly available bots but is tailored to handle internal data. Development involved building a complete system that integrates the LLM with relational and document databases, including the client’s internal data storage solutions, and ensures smooth interaction between the platform and its users.
The document analysis and processing capabilities enable extracting relevant information from internal company documents such as policies, instructions, guides, operational data, and technical specifications. This allows the user to quickly obtain accurate and up-to-date answers to their questions without having to manually search and analyze data.
By implementing caching, query optimization, and parallel processing, we significantly improved the speed and efficiency of user interactions with the chatbot. Users can receive responses more swiftly, thanks to the frequently requested information stored in the cache. Additionally, we use parallel processing to distribute the workload, enabling the system to handle multiple requests at once. This makes the chatbot more responsive, even during peak times.
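The combination of caching and parallel request handling described above can be sketched in a few lines. This is a minimal illustration, not the production implementation: `answer` is a hypothetical stand-in for the expensive LLM-plus-retrieval call, and the sleep simulates model latency.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from functools import lru_cache

@lru_cache(maxsize=1024)
def answer(question: str) -> str:
    # Hypothetical stand-in for the expensive LLM + retrieval call.
    time.sleep(0.1)  # simulate model latency; repeated questions hit the cache
    return f"answer to: {question}"

def handle_batch(questions: list[str]) -> list[str]:
    # Parallel processing: a thread pool serves multiple user requests at once.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(answer, questions))
```

The first call for a given question pays the full latency; subsequent identical questions return from the cache almost instantly, which is what makes the chatbot feel responsive under repeated queries.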
We have created a data repository for processing structured relational data. This chatbot feature includes requests to retrieve information from the Data Mart. By providing direct access to the Data Mart through the chatbot, users can effortlessly obtain the information they need without consulting other sources. This simplified access means that decision-makers have up-to-date insights at their fingertips, facilitating agile responses to market changes and strategic opportunities.
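One safe way to expose a Data Mart through a chatbot is to map recognized user intents to pre-approved, read-only SQL templates rather than generating free-form SQL. The sketch below illustrates that pattern under stated assumptions: the intent names and the `sales` table are hypothetical, and an in-memory SQLite database stands in for the real Data Mart.

```python
import sqlite3

# Hypothetical mapping from recognized user intents to vetted, read-only
# SQL templates against the Data Mart.
MART_QUERIES = {
    "sales_by_region": (
        "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
    ),
}

def ask_data_mart(conn: sqlite3.Connection, intent: str) -> list[tuple]:
    # Run the pre-approved query for a recognized intent and return the rows.
    sql = MART_QUERIES[intent]  # KeyError means the intent is not supported
    return conn.execute(sql).fetchall()

# Demo mart with a single fact table (SQLite stands in for the warehouse).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("north", 100), ("north", 50), ("south", 70)])
```

Restricting the chatbot to a fixed set of parameterized queries keeps Data Mart access predictable and auditable while still giving decision-makers answers on demand.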
We refined document management and retrieval by integrating Azure Data Lake Gen 2 for document ingestion, segmenting documents into chunks, and utilizing Azure OpenAI to generate embeddings. These embeddings are stored in Azure AI Search for efficient analysis and retrieval. User queries are embedded and compared against the stored document embeddings in Azure AI Search to deliver relevant responses instantly.
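The chunk-embed-retrieve flow above can be sketched end to end without the Azure SDKs. In this simplified illustration, a toy word-hashing embedder stands in for the Azure OpenAI embedding model, and a plain in-memory list stands in for the Azure AI Search vector index; the documents are invented examples.

```python
import hashlib
import math

def toy_embed(text: str, dim: int = 64) -> list[float]:
    # Toy stand-in for an Azure OpenAI embedding model:
    # hash each word into a bucket and L2-normalize the counts.
    vec = [0.0] * dim
    for word in text.lower().split():
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def chunk(document: str, size: int = 40) -> list[str]:
    # Segment a document into overlapping word windows before embedding.
    words = document.split()
    step = max(size // 2, 1)
    return [" ".join(words[i:i + size]) for i in range(0, len(words), step)]

def retrieve(query: str, index: list[tuple[str, list[float]]],
             top_k: int = 1) -> list[str]:
    # Rank chunks by cosine similarity (dot product of normalized vectors).
    q = toy_embed(query)
    ranked = sorted(index, key=lambda item: -sum(a * b for a, b in zip(q, item[1])))
    return [text for text, _ in ranked[:top_k]]

# Ingest: chunk each document and index its embedding
# (Azure AI Search plays this role in the real system).
docs = [
    "The vacation policy grants 25 days of paid leave per calendar year.",
    "Q3 sales in the northern region grew by 12 percent over Q2.",
]
index = [(c, toy_embed(c)) for d in docs for c in chunk(d)]
```

A query about the vacation policy lands on the policy chunk because its embedding shares more buckets with the query than the sales chunk does; the production system does the same comparison with dense model embeddings at much higher fidelity.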
The information is presented in the form of charts created with Plotly, tables styled with Material UI, and straightforward text content. This mix makes the content more engaging and helps communicate the details in a way that’s easy to understand and act on.
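Since Plotly figures serialize to a JSON structure of traces plus a layout, the backend can describe a chart as plain data and let the frontend render it. The sketch below builds such a payload; the `"type": "chart"` envelope and the function name are hypothetical, while the inner `figure` follows Plotly's standard data/layout shape.

```python
def build_chart_response(labels: list[str], values: list[float],
                         title: str) -> dict:
    # Plotly figures serialize to JSON: a "data" list of traces plus a "layout".
    # The outer envelope is a hypothetical response format for the chatbot UI.
    return {
        "type": "chart",
        "figure": {
            "data": [{"type": "bar", "x": labels, "y": values}],
            "layout": {"title": {"text": title}},
        },
    }
```

The same envelope idea extends to tables (rows for a Material UI table) and plain text, letting the chat client pick the right renderer per response.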
Our team integrated voice query functionality alongside text-based interactions in the chatbot for data analytics. Users can now effortlessly interact with the bot via voice commands, with the added capability of translating spoken text into written form.
Frontend
Axios, Material UI, Plotly, React, React Context, react-markdown, TypeScript
Backend
Azure AI Search, Azure App Service, Azure Data Factory, Azure Data Lake Gen2, Azure Databricks, Azure Functions, Azure OpenAI, Bicep, Cosmos DB, Spark
Libraries
Axios, Material UI, Matplotlib, NumPy, Pandas, Plotly, PySpark, React Context, react-markdown, Streamlit, TypeScript
First, we conducted a detailed analysis of the business requirements and mapped out a comprehensive plan for the software based on our findings.
Next, we created a visual representation of the chatbot, including wireframes, prototypes, and mockups, based on the information we gathered. The design phase focused on a user-friendly interface that would give customers easy navigation and access to the chatbot’s features.
The development covered creating a full-scale system to integrate LLM with both relational and document databases, including internal client data storage solutions. We provided smooth interaction between the platform and users by employing natural language processing (NLP) to immediately extract key information and integrating retrieval-augmented generation (RAG) AI for contextually relevant responses.
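At the heart of the RAG integration is prompt assembly: retrieved document chunks are combined with the user's question so the model answers from internal data rather than from its general training. A minimal sketch, with an invented prompt template (the real system's wording is not disclosed):

```python
def build_rag_prompt(question: str, retrieved_chunks: list[str]) -> str:
    # Ground the model: instruct it to answer only from the retrieved
    # internal documents, which are numbered so answers can cite sources.
    context = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(retrieved_chunks))
    return (
        "Answer the question using only the context below.\n"
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )
```

The assembled prompt is what gets sent to the LLM; constraining the model to the supplied context is what makes the responses contextually relevant to the company's own data.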
We optimized performance through caching, improved query efficiency, and parallel processing, while providing direct access to structured data from the Data Mart.
Finally, we incorporated voice query and text-to-speech features to elevate accessibility and meet diverse user needs.
1 Front-End Developer
1 Back-End Developer
1 Data Scientist
1 Data Engineer
1 Data Engineer / DevOps
Our team developed a tailored analytics platform, which the client then evaluated through hands-on testing. This resulted in several noticeable outcomes:
This advanced chatbot platform delivers strong performance and elevates the user experience by swiftly extracting key information from internal documents using NLP. Integrated with RAG AI for contextually relevant answers, it optimizes response time through caching, query efficiency, and parallel processing while providing direct access to structured data from the Data Mart. Voice query and text-to-speech capabilities improve accessibility, catering to diverse user needs.
Our client started offering the solution to their customers, and it quickly gained traction with impressive sales figures. The solution’s effectiveness and ease of use have led to high satisfaction rates among their clients, further solidifying its success in the market.
67% faster query and data processing
34% increase in teams’ performance