Google introduced new artificial intelligence (AI) technology and partnerships on August 29th, aimed at making AI more accessible to large enterprises. The announcements came at Google's Cloud Next conference in San Francisco, where the company also named new clients for its cloud software, including General Motors and Estée Lauder Companies.
Google also unveiled a new iteration of its custom AI chips, along with new security features and enhancements for its office software suite. The flurry of announcements fits Google's recent push to highlight its AI strategy in response to Microsoft's ambitious AI roadmap.
To enhance its cloud service for enterprises, Google added 20 new AI models to its existing lineup, bringing the total to 100. The new capabilities include partnerships that let Google Cloud customers use Meta's LLaMa 2 and Anthropic's Claude 2.
Google also introduced upgraded versions of its foundation AI models, improving performance and adding new capabilities. For instance, the updated version of PaLM, Google's text model, now accepts much longer inputs, which makes it practical to process lengthy documents such as books and legal briefs.
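To make the longer-context change concrete, here is a minimal sketch of how a developer might feed a lengthy document to a PaLM text model through the Vertex AI Python SDK. The project name, model name ("text-bison-32k"), and parameter values are illustrative assumptions, not details confirmed in the announcement.

```python
# Illustrative sketch only: assumes the Vertex AI Python SDK
# (google-cloud-aiplatform) and a long-context PaLM text model.
# The project, model name, and parameters below are assumptions.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")  # hypothetical project

# Load a long-context PaLM text model (name assumed for illustration).
model = TextGenerationModel.from_pretrained("text-bison-32k")

# Read a lengthy document, e.g. a legal brief, and summarize it in one call.
with open("legal_brief.txt", "r", encoding="utf-8") as f:
    document = f.read()

response = model.predict(
    f"Summarize the key arguments in the following brief:\n\n{document}",
    max_output_tokens=1024,
    temperature=0.2,
)
print(response.text)
```

The point of the larger context window is that the whole document can go into a single prompt like this, rather than being chunked and summarized piecewise.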
SynthID
Google introduced a new tool called SynthID, which adds watermarks to AI-generated images. The technology modifies the image data itself in a way that is imperceptible to the human eye, and the watermark remains detectable even after the image is altered or tampered with.
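Google has not published SynthID's embedding method, so the following is a deliberately simplified, hypothetical sketch of the general idea behind imperceptible pixel-level watermarking (here, nudging least-significant bits), not Google's actual technique.

```python
# Hypothetical illustration of the *general idea* of an invisible watermark:
# tweak low-order pixel bits so the change is imperceptible but detectable.
# This is NOT SynthID's method, which Google has not published.
import numpy as np
from PIL import Image

def embed_bit_pattern(image_path: str, pattern: np.ndarray) -> Image.Image:
    """Embed a binary pattern into the least-significant bit of the red channel."""
    img = np.array(Image.open(image_path).convert("RGB"))
    h, w = pattern.shape
    region = img[:h, :w, 0]
    # Clear the lowest bit, then write the watermark bit into it.
    img[:h, :w, 0] = (region & 0xFE) | pattern
    return Image.fromarray(img)

def read_bit_pattern(img: Image.Image, shape: tuple[int, int]) -> np.ndarray:
    """Recover the embedded pattern from the least-significant bits."""
    arr = np.array(img.convert("RGB"))
    return arr[: shape[0], : shape[1], 0] & 0x01

# Usage: embed a small random pattern and verify it can be read back.
pattern = np.random.randint(0, 2, size=(32, 32), dtype=np.uint8)
watermarked = embed_bit_pattern("generated_image.png", pattern)
recovered = read_bit_pattern(watermarked, pattern.shape)
assert np.array_equal(pattern, recovered)
```

Note that a naive least-significant-bit scheme like this does not survive resizing or re-encoding; part of what distinguishes SynthID is that its watermark is designed to persist through such transformations.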
Customized AI Chip
Google also opened access to a new version of its custom AI chip, tailored for generative AI (genAI) and large language models. The chip, called the TPU v5e, is purpose-built both for training large models and for serving content generated by them. Google groups TPU v5e chips into pods of 256, which it calls a “supercomputer,” and cloud customers can interconnect multiple pods to tackle more demanding computational workloads.
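As a rough illustration of how workloads are spread across the many chips in a TPU pod, here is a minimal JAX sketch in Python, a common way to program Cloud TPUs. The mesh shape and array sizes are assumptions for the demo, not specifications from the announcement.

```python
# Illustrative sketch: spreading a computation across the TPU chips visible to a
# host using JAX, a common way to program Cloud TPUs. Sizes are demo assumptions.
import jax
import jax.numpy as jnp
from jax.experimental import mesh_utils
from jax.sharding import Mesh, NamedSharding, PartitionSpec

devices = jax.devices()  # on a TPU v5e slice this lists the attached chips
print(f"visible devices: {len(devices)}")

# Build a 1-D device mesh and shard a large batch of rows across it.
mesh = Mesh(mesh_utils.create_device_mesh((len(devices),)), axis_names=("data",))
sharding = NamedSharding(mesh, PartitionSpec("data", None))

x = jax.device_put(jnp.ones((len(devices) * 512, 1024)), sharding)

@jax.jit
def toy_layer(x):
    # Each chip processes its shard of rows; jit handles the parallel execution.
    return jnp.tanh(x @ jnp.ones((1024, 1024)))

y = toy_layer(x)
print(y.shape)
```

The same sharding pattern scales from a handful of chips to a full 256-chip pod, which is the kind of scaling the pod design is meant to make routine.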