It was a late evening in the confines of our small office, where the hum of ambition was as palpable as the heat from our overworked machines. Sohel and I were deep into training our latest project—a clause identification model that we hoped would revolutionize how contracts were managed and analyzed. We had only our local laptops, and on that particular night, we pushed my laptop to its limits. The GPU and CPU were firing on all cylinders, the memory was stretched thin, and then it happened: the sound of a helicopter filled the room.
In a bizarre twist of fate, a stray wire inside the laptop had come loose and was now brushing against the CPU fan. The noise was deafening in the small room, making it almost impossible to concentrate. But amid the cacophony, there was a sense of excitement. This machine, straining under the demands we placed on it, was a testament to our determination. We were on the edge of innovation, and even the mechanical protests of a laptop weren't going to stop us.
It was March 2019 when Aavenir was born, a startup with a vision to harness the power of AI in revolutionizing contract management. With a modest team of just ten members by September, the ambitions were high but the resources were limited. Among these ten were two dedicated souls in the AI department: myself, Anand Trivedi, and my colleague Sohel, who together faced the daunting task of building advanced AI tools on a shoestring budget.
These early struggles were not merely hurdles but valuable lessons in resilience and innovation. They taught us how to maximize efficiency and performance from minimal resources. The clause identification model, which started as a test of our technical and creative limits, gradually took shape into a robust tool capable of pinpointing and analyzing contractual clauses with surprising accuracy.
In those early days, the pursuit of affordable technology was critical. Sohel and I spent countless hours discussing and researching where to find cheap GPUs. It was 2019, just before the world was hit by the pandemic, and cost-effective GPUs were already scarce. Our need for powerful computing resources to train sophisticated models was pressing, yet the financial constraints were a constant reminder of our startup reality.
Just as we were starting to make headway with our AI developments, the world was hit by the unprecedented COVID-19 pandemic. For a young startup like Aavenir, the timing could not have been more challenging. The early days of 2020 brought not just a health crisis but a severe disruption to businesses everywhere. We were no exception.
Before the pandemic struck, we had just completed the initial version of our automated invoice scanning system. This was not just another AI tool—it was a groundbreaking solution capable of extracting data from financial documents without any predefined templates. You could throw any document at it, and it would work its magic, a feature that was poised to revolutionize financial processing. However, as markets trembled under the weight of the pandemic, collaborations and new ventures stalled, and the promise of quick adoption seemed to fade away.
In the early months of 2021, as the world grappled with uncertainty, Aavenir was on the brink of a technological breakthrough. Sohel and I, determined to keep our momentum, pivoted from the LSTM and attention-based systems we had been laboring over to something more potent—transformer models. This shift was not just an upgrade; it was a revolution in how our systems understood legal and financial documents.
The previous systems had laid a solid foundation, but the transformer models unlocked new realms of possibilities. They offered significantly higher accuracy and efficiency, propelling our AI capabilities far beyond our earlier iterations. It was as if we had been equipped with a new lens through which the intricate patterns and nuances of text in contracts and invoices became startlingly clear.
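To make the shift concrete, here is a minimal sketch of what fine-tuning a pretrained transformer for clause identification can look like, assuming the Hugging Face Transformers and Datasets libraries. The base model, label set, and file names are illustrative stand-ins, not our actual production pipeline.

```python
# Minimal sketch: fine-tuning a pretrained transformer for clause classification.
# The label set, base model, and CSV file names below are hypothetical examples.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

CLAUSE_LABELS = ["indemnification", "termination", "confidentiality", "payment_terms"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(CLAUSE_LABELS)
)

# Expects CSV files with a "text" column (clause paragraph) and an integer "label" column.
dataset = load_dataset("csv", data_files={"train": "clauses_train.csv",
                                          "validation": "clauses_val.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="clause-model", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
print(trainer.evaluate())   # held-out accuracy/loss on the validation clauses
```

Even a modest setup like this, pointed at a carefully labeled clause dataset, captures the essence of what the transformer switch gave us: the heavy lifting of language understanding comes pretrained, and the fine-tuning teaches it our domain.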
Adopting transformer technology at this point was a bold move, given its relative novelty in the field. But it was a gamble that paid off. Our systems were not just keeping pace; they were setting the pace—somewhat ahead of their time, perfectly positioned to meet an accelerating demand for smarter, faster AI solutions in a post-pandemic world.
As we fine-tuned these models, our small team felt a surge of pride. Amidst a global crisis, we had managed to not only survive but innovate. It was a testament to our belief that even in the darkest times, a spark of creativity can light the way forward.
It was a chilly November morning in 2022 when the usual calm of Aavenir’s office was disrupted by a buzz that swept through the tech world. I, Anand, and my teammate Sohel were already at our desks, discussing enhancements to our AI that converted natural queries into SQL—a recent success story for our small team.
Suddenly, Sohel's phone beeped with a notification that caught our immediate attention. "Check this out, Anand," he said, showing me an article on his screen about the launch of ChatGPT. It described how this new AI model could converse, answer questions, and even write coherent passages across various domains, mimicking human-like text generation.
We read through the details, impressed and slightly daunted. "This is a game-changer," I commented, feeling the weight of the new competition.
Over the next few days, it became clear that the market's expectations had shifted dramatically. Inquiries started coming in from clients, all asking if we could provide similar capabilities tailored to their specific needs. The release of ChatGPT had not only advanced the field of AI but also altered the landscape of customer demand.
"We need to think bigger," as we brainstormed our approach. "Why not combine the precision of our specialized models with the broad, generative capabilities of ChatGPT?"
I nodded in agreement, invigorated by the challenge. It was time to expand our horizons and explore how we could integrate more generalist AI features into our system. The journey ahead was daunting but filled with potential.
The buzz around ChatGPT's capabilities was undeniable, but it also brought to light a significant concern—data security. Our clients were excited about the possibilities of a generative AI but were equally worried about sharing sensitive data on an open platform like ChatGPT. This gap presented us with a unique opportunity.
One morning, Sohel and I decided to propose a bold move to our leadership team: to fine-tune our own model, designed specifically for our clients' needs, ensuring both high performance and data security. As we laid out our plan in the boardroom, our CEO and CTO exchanged glances, their smiles tinged with skepticism. To them, the idea of competing with an AI as sophisticated as ChatGPT seemed nearly impossible.
Today, the landscape of AI development has significantly transformed, with numerous models and datasets readily available on platforms like Hugging Face, making it easier to fine-tune models to specific needs. However, looking back, the early days of refining large language models (LLMs) felt like navigating in the dark. At that time, there was no established framework or pathway for fine-tuning LLMs, and each step forward was a venture into uncharted territory. This lack of guidance made the initial attempts at model optimization more challenging and uncertain.
Undeterred by their initial reaction, we left the meeting with a clear resolve. We dived into the world of open-source models, leveraging everything from BERT to GPT variants, adapting them to our needs. The task was daunting: we needed to prepare a vast dataset tailored to our custom use cases, ensuring it was both comprehensive and of high quality.
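For illustration, a bare-bones version of that kind of adaptation looks something like the sketch below, which continues a pretrained open-source causal language model on a domain corpus. The model choice (GPT-2) and the corpus file name are stand-ins, not the exact models or data we worked with.

```python
# Minimal sketch: adapting an open-source causal LM to a domain-specific corpus.
# Uses Hugging Face "transformers" and "datasets"; model and file names are illustrative.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# One JSON-lines file of domain text, e.g. {"text": "<contract or invoice passage>"}.
corpus = load_dataset("json", data_files="contract_corpus.jsonl")["train"]
corpus = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=corpus.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-lm", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=corpus,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The hard part was never the training loop; it was assembling a dataset that was both broad enough to cover our use cases and clean enough to be worth learning from.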
As we progressed, one thing became increasingly clear: a specialized, well-trained model on a quality dataset could indeed outperform a general model on those specific tasks. This realization fueled our efforts. We spent countless hours coding, testing, and refining our approach. Each setback was a lesson, and every breakthrough, a victory.
Gradually, our model began to take shape. It wasn't just about matching the capabilities of general AI systems anymore; it was about surpassing them in areas that mattered most to our clients. As our model's performance improved, those skeptical smiles turned into nods of approval. Our leadership team, once doubtful, now saw the potential of our tailored AI solution.
As we progressed with our specialized AI model, the next monumental task was deployment. Achieving the speed and responsiveness of ChatGPT was essential, but the costs associated with high-performance computing resources were daunting. Our team was determined to find a solution that wouldn't break the bank but would still deliver the blazing-fast performance our clients expected.
After extensive research and experimentation, we discovered a promising approach: wrapping our AI model in C++. This technique allowed us to optimize the model's inference time significantly. It was a game-changer. The C++ wrapper not only reduced our operational costs but also increased the model's speed, bringing it closer to the responsiveness of ChatGPT. This technical innovation brought our model into production with impressive performance metrics.
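One common way to hand a trained PyTorch model to a C++ runtime is to export it to TorchScript and load it from C++ with libtorch's torch::jit::load. The sketch below shows only the Python-side export step, as an illustration of that general pattern rather than our exact wrapper; the checkpoint path and sample sentence are assumptions.

```python
# Minimal sketch of the Python-side hand-off to a C++ runtime: trace the fine-tuned
# model to TorchScript, which libtorch can load from C++ via torch::jit::load(...).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

CHECKPOINT = "clause-model"   # hypothetical path to the fine-tuned checkpoint

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(
    CHECKPOINT, torchscript=True   # return plain tuples so the model is traceable
).eval()

# Trace with a representative input so the computation graph is fixed at export time.
example = tokenizer("Either party may terminate this agreement with 30 days notice.",
                    return_tensors="pt", padding="max_length",
                    max_length=256, truncation=True)
traced = torch.jit.trace(model, (example["input_ids"], example["attention_mask"]))
traced.save("clause_model.pt")   # the C++ service loads this file at startup
```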
However, the journey didn't end there. As more customers interacted with our AI, feedback started pouring in. While the model performed exceptionally well in many respects, there were inevitable gaps. Customers pointed out specific areas where the AI didn't meet their expectations or where it misunderstood the nuances of their unique use cases.
One significant challenge was the lack of a mechanism to directly incorporate this feedback into the model's learning process. Our AI was adept at handling the tasks it was trained for, but adapting to new inputs or correcting errors based on user feedback wasn't straightforward. This gap highlighted a crucial area for improvement.
As we faced the challenges of enhancing and refining our AI system, we turned to the wealth of knowledge available in the broader AI research community. Delving into research papers from leading organizations like Meta and Microsoft, we gleaned insights that shaped our strategy for creating a self-evolving large language model (LLM). Our goal was to not only address the immediate feedback from users but to establish a system that continually improves and adapts over time.
The core of our new strategy was the implementation of what we called the "Observer Model." This was a fine-tuned component designed to monitor and evaluate the main production model's performance in real-time. The Observer Model's primary function was to assess the responses generated by our AI on several critical parameters, including fairness, ethics, and factual correctness.
The introduction of the Observer Model also transformed how customer feedback was handled. Previously, feedback had to be manually reviewed and incorporated, a process that was both time-consuming and susceptible to delays. With the Observer Model, whenever customers flagged responses as inappropriate or incorrect, the model automatically logged these instances.
This data was not immediately discarded or blindly accepted. Instead, each flagged response was annotated by the Observer Model with its assessment and then stored in detailed logs for further review. Our team would periodically review these logs, combining human oversight with automated processes to ensure a balanced approach to model training.
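In spirit, the observer loop can be sketched in a few lines: a scoring function (standing in for the fine-tuned Observer Model) rates each response on the parameters above, and anything that scores below a threshold, or that a customer has flagged, is appended to a review log. The criteria names, threshold, and log format here are illustrative assumptions, not our production schema.

```python
# Minimal sketch of the observer loop: score each production response on a few
# criteria and queue low-scoring or customer-flagged cases for human review.
import json, time

CRITERIA = ["fairness", "ethics", "factual_correctness"]
THRESHOLD = 0.5   # illustrative cut-off on a 0-1 score

def observe(query: str, response: str, score_fn, flagged_by_user: bool = False,
            log_path: str = "observer_log.jsonl") -> dict:
    """score_fn(query, response, criterion) -> float in [0, 1], e.g. an observer-model call."""
    scores = {c: score_fn(query, response, c) for c in CRITERIA}
    needs_review = flagged_by_user or any(s < THRESHOLD for s in scores.values())
    record = {"ts": time.time(), "query": query, "response": response,
              "scores": scores, "flagged_by_user": flagged_by_user,
              "needs_review": needs_review}
    if needs_review:
        with open(log_path, "a") as f:   # append to the queue for periodic human audit
            f.write(json.dumps(record) + "\n")
    return record
```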
To integrate the insights gained from this feedback loop effectively, we employed reinforcement learning and preference optimization techniques, specifically Proximal Policy Optimization (PPO) and Direct Preference Optimization (DPO). These methods allowed us to fine-tune our AI based on real-world interactions and feedback, adjusting the model's behavior in a controlled and incremental manner.
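For readers unfamiliar with DPO, its core is a simple loss over pairs of preferred and rejected responses, computed against a frozen reference copy of the model. The snippet below is a conceptual PyTorch rendering of that loss, not our actual training code; the beta value and argument shapes are assumptions.

```python
# Minimal sketch of the Direct Preference Optimization (DPO) loss on one batch of
# preference pairs. Log-probs come from the policy being tuned and a frozen reference
# copy; beta controls how far the policy may drift from the reference.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor, policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor, ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Each argument is a tensor of summed token log-probs for the chosen/rejected responses."""
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    # Push the policy to assign relatively higher probability to the preferred response.
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()
```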
This approach not only improved the model's performance over time but also ensured that the enhancements were grounded in actual user experiences and needs. By systematically integrating user feedback and employing reinforcement learning, our AI was not just reacting to inputs but evolving from them.
Our AI system at Aavenir is constantly evolving, making notable progress each day. However, it is not without its imperfections. The system still encounters glitches and suffers from various process gaps, highlighting the complexities of such advanced technologies.
Testing our system poses significant challenges due to its complexity. It's crucial to have robust automated testing in place to ensure that new features do not disrupt existing functionalities. This ongoing testing is essential for maintaining the system's reliability and for preventing any regression in performance as we continue to develop and refine our features.
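In practice, even a small, fixed regression suite goes a long way. The pytest sketch below pins a handful of known inputs to their expected clause labels, so any future change that breaks them fails the build; the predict() entry point and the test cases are hypothetical stand-ins for the real inference service.

```python
# Minimal sketch of a regression test: a fixed set of inputs with known expected labels
# must keep passing as the model and surrounding features evolve.
import pytest

REGRESSION_CASES = [
    ("Either party may terminate this agreement with 30 days notice.", "termination"),
    ("The receiving party shall keep all disclosed information confidential.", "confidentiality"),
]

@pytest.mark.parametrize("text,expected_label", REGRESSION_CASES)
def test_clause_labels_do_not_regress(text, expected_label):
    from clause_service import predict   # hypothetical inference entry point
    assert predict(text) == expected_label
```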
Despite these hurdles, we are committed to improving and refining our AI system. We understand the importance of continuous integration and testing in developing a reliable and efficient AI platform. As we move forward, our focus remains on overcoming these challenges and enhancing our system's capabilities.