In recent years, generative AI has made remarkable progress, thanks to the development of new techniques, models, and tools. Here are some of the latest breakthroughs in generative AI that have captured the attention of researchers, developers, and users.
Microsoft has announced a variety of new Azure Machine Learning features, including large language model (LLM) workflow creation, a OneLake datastore, and an extended model catalogue. The model catalogue is expanding to incorporate models available on Hugging Face, such as Stability AI's Stable Diffusion models. Microsoft has added Meta Platforms Inc.'s Llama 2 and Code Llama models to its catalogue, along with models from Mistral AI and Cohere that will be delivered through its new Models as a Service offering. Microsoft's generative AI products have previously relied heavily on OpenAI models, so broadening the available models is a significant improvement. The move also shapes the trajectory of Azure AI Studio, a generative AI development centre designed to support a variety of prebuilt and customisable AI models.
Azure AI Studio is positioned as a comprehensive platform for developing generative AI capabilities. It allows developers to create, test, and deploy prebuilt and customised AI models using tools that support fine-tuning, evaluation, multimodal capabilities, and prompt flow orchestration. The company also highlighted its expansion into custom silicon with Microsoft Azure Maia and Microsoft Azure Cobalt, as well as upgrades to Microsoft Copilot. These included the integration of Security Copilot across the Microsoft security portfolio and a studio development environment where organisations can build their own Copilot apps.
GitHub Universe 2023 also brought several generative AI announcements, along with the declaration, "Just as GitHub was founded on Git, today we are re-founded on Copilot." GitHub Copilot Chat's general release was set for December, and it will be incorporated directly into github.com. Copilot Enterprise, which provides fine-tuned models, code-review capabilities, and document search, will be available in February 2024.
OpenAI made several announcements at its DevDay. The company's GPT-4 Turbo model, which costs substantially less to run than GPT-4, received the most attention, as did a new Assistants API designed to help developers add AI assistants to their own apps. Customisation was a key theme for OpenAI, which profiled customisable "GPTs" and launched code-free tooling to build them. Users will be able to make these versions available via the GPT Store.
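The Assistants API is a hosted REST interface, so creating an assistant amounts to one authenticated POST. As a rough sketch (the endpoint path, beta header, and payload fields below follow OpenAI's published v1 documentation, but the assistant name and instructions are invented, and the request is constructed without being sent):

```python
import json
import urllib.request

# Sketch only: the request shape for creating an assistant via the
# Assistants API. Verify field names against the current API reference.
API_URL = "https://api.openai.com/v1/assistants"

payload = {
    "model": "gpt-4-1106-preview",            # the GPT-4 Turbo preview model
    "name": "Support Helper",                 # hypothetical assistant name
    "instructions": "Answer product questions concisely.",
    "tools": [{"type": "code_interpreter"}],  # optional built-in tool
}

# Build (but do not send) the authenticated request.
request = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_API_KEY",  # placeholder key
        "OpenAI-Beta": "assistants=v1",          # beta opt-in header
    },
    method="POST",
)

print(request.get_method(), request.full_url)
```

Sending the request would return an assistant object whose `id` is then referenced when creating threads and runs.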
In line with this customisation effort, the company is launching an access programme for GPT-4 fine-tuning. Organisations can apply to OpenAI's Custom Models programme to have a bespoke GPT-4 built for a particular domain. Fine-tuning had previously been limited to GPT-3.5, limiting the customisability of OpenAI models.
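Whether targeting GPT-3.5 or, through the access programme, GPT-4, fine-tuning jobs consume a training file of chat-formatted examples, one JSON object per line. A minimal sketch of that JSONL shape (the domain and example content here are made up):

```python
import json

# Illustrative only: the chat-formatted JSONL consumed by OpenAI's
# fine-tuning endpoints. Each line is one training example.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You answer questions about maritime law."},
            {"role": "user", "content": "What is a bill of lading?"},
            {"role": "assistant", "content": "A bill of lading is a carrier's receipt for cargo."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "You answer questions about maritime law."},
            {"role": "user", "content": "Define demurrage."},
            {"role": "assistant", "content": "Demurrage is a charge for delaying a ship beyond laytime."},
        ]
    },
]

# One JSON object per line, as expected for the uploaded training file.
jsonl = "\n".join(json.dumps(ex) for ex in examples)

# Sanity check: every line parses and follows the system/user/assistant shape.
for line in jsonl.splitlines():
    roles = [m["role"] for m in json.loads(line)["messages"]]
    assert roles == ["system", "user", "assistant"]

print(f"{len(jsonl.splitlines())} training examples")
```

The resulting file is uploaded and referenced when creating a fine-tuning job; the trained model is then addressed by its own model identifier.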
With the assertion that "ChatGPT can see, hear, and speak," OpenAI also released vision capabilities for GPT-4 Turbo and a new text-to-speech model. In October, it made DALL·E 3 available to ChatGPT Plus and Enterprise users. At DevDay, OpenAI announced that DALL·E 3 will be available through its Images API, and at Ignite, Microsoft announced the availability of DALL·E 3 within Azure OpenAI Service. These upgrades, however, have been largely overshadowed by the company's personnel issues.
International Business Machines Corp. announced watsonx.governance, which lets organisations build, train, deploy, and now govern both generative and classical AI models. Large language models and other foundation models pose distinctive security and governance challenges, and IBM's watsonx.governance, generally available in December, aims to address some of them. These include detecting when quality-metric and drift thresholds are exceeded for an LLM's inputs and outputs, as well as identifying toxic language and personally identifiable information.
Watsonx.governance also collects information about models during development, allowing organisations to manage risk against their own tolerance thresholds for bias and model drift. It can likewise help ensure compliance with early AI regulation, such as the proposed EU AI Act. The software currently works only with models hosted in IBM Cloud, but future plans include connectors for the hyperscale cloud providers, the ability to manage and monitor third-party models, and an on-premises version in the first quarter of 2024.
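The core idea behind such tolerance thresholds is simple to state. As a generic, simplified illustration (this is not watsonx.governance's API; the metric names and limits below are invented), a monitor compares each tracked metric against an organisation-set limit and flags breaches:

```python
# Generic sketch of threshold-based model monitoring, the kind of check
# a governance tool automates for an LLM's inputs and outputs.
# Metric names and limits are illustrative, not IBM's.
THRESHOLDS = {"toxicity": 0.10, "pii_rate": 0.01, "drift": 0.15}

def breached(metrics: dict) -> list:
    """Return the names of metrics that exceed their tolerance limits."""
    return [name for name, limit in THRESHOLDS.items()
            if metrics.get(name, 0.0) > limit]

# A PII rate of 0.03 exceeds the 0.01 limit, so only it is flagged.
print(breached({"toxicity": 0.02, "pii_rate": 0.03, "drift": 0.05}))
```

In a governance product, a breach would trigger an alert or block a deployment rather than just returning a list, but the comparison itself is the same.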