North America is leading the technological revolution powered by generative artificial intelligence. From urban infrastructure to governance, cities are applying generative AI tools to smart city initiatives.
North American cities are making significant investments in regulatory policies and frameworks for local governance. Here, we explore the generative AI initiatives of North American governments:
In April 2023, the Mexican Senate established the National Alliance for Artificial Intelligence (Alianza Nacional de Inteligencia Artificial, or ANIA). ANIA's primary objective is to strengthen Mexico's AI ecosystem by working to establish a legal framework, creating and implementing regulatory structures tailored to artificial intelligence.
In June 2023, U.S. Secretary of Commerce Gina Raimondo revealed the initiation of a new public working group on Artificial Intelligence (AI) by the National Institute of Standards and Technology (NIST).
Building on the NIST AI Risk Management Framework, the Public Working Group on Generative AI will examine the opportunities and risks associated with content-generating AI.
Focusing on modalities such as code, text, images, video, and music, the group aims to provide practical guidance for organizations managing the risks of generative AI technologies.
The announcement followed President Biden's meetings with AI specialists, signaling that the government is concerned not only with the benefits of AI but also with its risks.
In August 2023, the Department of Defense (DoD) announced the formation of a task force focused on generative artificial intelligence (AI), emphasizing the DoD's dedication to the responsible and strategic use of AI. The move underscores the department's commitment to exploring how AI can advance its various objectives.
On September 27, 2023, the Canadian government acknowledged the swift progress of AI technology and its significant influence on the nation's economy. To address this, it introduced Canada's Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems.
So far, fourteen entities have endorsed the code, signaling their commitment to responsible AI development and use.
In October 2023, the U.S. National Science Foundation announced a US$10.9 million grant to support research that aligns with the advancement of artificial intelligence while prioritizing user safety.
The Safe Learning-Enabled Systems program, a joint effort between the NSF, Open Philanthropy, and Good Ventures, focuses on foundational research for designing and implementing safe learning-enabled systems, including autonomous and generative AI technologies, emphasizing safety and reliability in their creation and use.
In November 2023, the City of Seattle introduced its Generative Artificial Intelligence (AI) Policy, in line with President Biden's recent Executive Order on AI. The policy seeks to strike a balance between innovation and strong protections, ensuring the responsible and accountable use of AI.
Seattle's initiative positions it as a leader in innovation and technology in the public sector, with Deputy Mayor Greg Wong endorsing the policy during a visit to Washington D.C.
During the 2022-23 financial year, Canada received a significant investment of US$2.57 billion for research and development in Artificial Intelligence (AI).
This commitment positioned Canada as a leader in the worldwide AI arena, exceeding the AI research funding of countries such as Germany and Japan and surpassing Australia and France in funding for AI development.
The ability of AI to create content that blurs the line between truth and fiction is worrisome. From fake news articles to altered videos, these creations can mislead the public, spread propaganda, and harm both people and organizations.
A damaged reputation is a steep price to pay for any business involved, directly or indirectly, in spreading false information.
AI that unintentionally spreads or even amplifies societal prejudices can provoke public anger, legal issues, and harm to a brand's image. For example, facial recognition technology, when biased, might incorrectly identify people, leading to possible legal disputes or damage to public relations.
Focusing on diverse training data and committing to regular checks for hidden biases are crucial steps. Organizations like OpenAI stress the need for varied training data.
Because generative AI can produce content closely resembling copyrighted works, it carries several legal implications.
Copyright violations can bring legal consequences and damage a brand's image. For example, if a generative AI composes a piece that sounds strikingly similar to an artist's copyrighted work, it could result in costly lawsuits and public backlash.
Privacy is another concern with AI models, because they are often trained on personal information. That information can be used without permission, and highly accurate synthetic profiles can be generated, posing a significant threat.
A violation of user privacy or misuse of data can lead to legal actions and a loss of user trust. Imagine an AI trained on personal medical records accidentally creating a profile that, though synthetic, closely resembles a real patient, raising privacy issues and potential violations of the Health Insurance Portability and Accountability Act (HIPAA).
The complex process of creating and using generative AI complicates assigning responsibility.
In the case of an error, an unclear responsibility structure can lead to accusations, legal issues, and a decrease in brand reliability. Think about the recent incidents with AI chatbots spreading hate speech or inappropriate content. Without clear accountability, the blame game becomes more intense, resulting in harm to the brand.
Which country invests most in AI?
The United States leads the world in artificial intelligence (AI) investment, consistently outpacing other countries in both public and private funding. The U.S. government has allocated substantial resources towards AI research and development, emphasizing its strategic importance for national security and economic competitiveness.
Additionally, American tech giants like Google, Microsoft, Amazon, and IBM are at the forefront of AI innovation, driving significant private sector investment. Silicon Valley, in particular, serves as a global hub for AI startups and research institutions, fostering an ecosystem that attracts top talent and venture capital.
The U.S. also benefits from a robust academic infrastructure, with leading universities conducting cutting-edge AI research.
What is one of the key challenges faced by GenAI?
One of the key challenges faced by Generative AI (GenAI) is ensuring ethical and responsible use. As these technologies become more sophisticated in generating text, images, and even videos, the potential for misuse increases. Ensuring that AI-generated content is used in ways that respect privacy, intellectual property rights, and cultural sensitivities is crucial.
Another challenge is the potential for biases to be perpetuated or amplified by AI models. Since these models learn from vast amounts of data, they can inadvertently learn and reproduce biases present in that data, leading to discriminatory or unfair outcomes.
Addressing bias requires careful curation of training data, thoughtful algorithm design, and ongoing monitoring and evaluation of AI systems.
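The data-curation step above can be made concrete. The sketch below is a hypothetical audit, not any organization's actual pipeline: it compares positive-label rates across groups defined by a sensitive attribute in a training set, a simple first check for the kind of imbalance that a model could learn and amplify. The field names (`group`, `approved`) and the toy records are illustrative assumptions.

```python
from collections import Counter

def audit_label_balance(records, attribute, label):
    """Compare positive-label rates across groups defined by a
    sensitive attribute; large gaps flag data worth investigating."""
    totals, positives = Counter(), Counter()
    for r in records:
        g = r[attribute]
        totals[g] += 1
        if r[label]:
            positives[g] += 1
    return {g: positives[g] / totals[g] for g in totals}

# Toy training data with an imbalanced positive rate across groups.
data = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": True},
]
rates = audit_label_balance(data, "group", "approved")
print(rates)  # group A is approved at twice the rate of group B
```

In practice such a check is only a starting point; ongoing monitoring of model outputs, not just training data, is needed to catch biases that emerge after deployment.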
Furthermore, there are concerns about the transparency and interpretability of AI-generated content. Understanding how and why AI systems produce certain outputs is essential for trust and accountability.
Research into explainable AI aims to address this challenge, ensuring that users can understand and verify the decisions made by AI models. Overall, navigating these ethical, bias-related, and interpretability challenges is essential for the responsible development and deployment of Generative AI technologies.
Is ChatGPT generative AI?
ChatGPT is indeed a generative AI model, designed to generate human-like text based on the input it receives. Generative AI refers to systems that produce new content rather than simply regurgitating predefined responses.
In the case of ChatGPT, it uses a deep learning architecture called the Transformer to understand and generate text.
This model has been trained on vast amounts of text data, enabling it to mimic human conversation patterns, understand context, and produce coherent responses.
Generative AI like ChatGPT is capable of producing diverse outputs, ranging from answering questions and providing explanations to generating creative writing or even engaging in casual conversation.
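The generation process described above can be sketched in miniature. A real Transformer computes next-token scores from the full context with learned weights; the toy below substitutes a hand-written lookup table (`TOY_LOGITS` and `VOCAB` are invented for illustration) but preserves the core autoregressive loop: score the vocabulary, convert scores to probabilities with softmax, sample one token, and feed it back in as context.

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy vocabulary and a stand-in "model": a lookup of next-token logits.
# A real Transformer computes these logits from the context instead.
VOCAB = ["hello", "world", "how", "are", "you", "<eos>"]
TOY_LOGITS = {
    (): [2.0, 0.1, 1.0, 0.1, 0.1, 0.0],
    ("hello",): [0.1, 2.5, 0.5, 0.1, 0.1, 0.2],
    ("hello", "world"): [0.1, 0.1, 0.1, 0.1, 0.1, 3.0],
}

def generate(max_tokens=5, seed=0):
    """Autoregressive decoding: sample one token at a time, feeding
    each choice back in as context for the next step."""
    random.seed(seed)
    context, out = (), []
    for _ in range(max_tokens):
        logits = TOY_LOGITS.get(context, [1.0] * len(VOCAB))
        probs = softmax(logits)
        token = random.choices(VOCAB, weights=probs)[0]
        if token == "<eos>":
            break
        out.append(token)
        context = tuple(out)
    return " ".join(out)

print(generate())
```

The sampling step is why the same prompt can yield different completions: the model defines a probability distribution over next tokens rather than a single fixed answer.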
Which is the best generative AI tool?
Determining the "best" generative AI tool depends on specific needs and contexts. Several leading tools stand out for their capabilities and versatility. OpenAI's GPT models, including ChatGPT, are widely recognized for their natural language understanding and generation capabilities, making them popular choices for various applications from chatbots to content creation.
Google's BERT (Bidirectional Encoder Representations from Transformers) excels in understanding the nuances of language, though it is primarily an understanding model rather than a generative one, and has been integrated into numerous Google services. Additionally, platforms built on models like GPT-3 offer enhanced customization and control over generated outputs, appealing to developers and businesses needing tailored solutions.
Which technique is commonly used in generative AI?
Generative AI commonly employs techniques like the Transformer architecture, recurrent neural networks (RNNs), and variants such as LSTM (Long Short-Term Memory) networks. These models enable the generation of coherent and contextually appropriate text based on learned patterns from vast amounts of training data.
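The Transformer's central operation is scaled dot-product attention, which lets each position weigh every other position when building its representation. Below is a minimal pure-Python sketch of that single operation (the example vectors are arbitrary); a full Transformer stacks many such attention layers with learned projections, which this sketch omits.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(queries, keys, values):
    """Scaled dot-product attention: each query scores all keys,
    and the output is the attention-weighted average of the values."""
    d_k = len(keys[0])
    outputs = []
    for q in queries:
        scores = [dot(q, k) / math.sqrt(d_k) for k in keys]
        weights = softmax(scores)  # weights sum to 1
        out = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
        outputs.append(out)
    return outputs

# One query attending over two key/value pairs; the query is more
# similar to the first key, so the first value dominates the output.
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
print(attention(Q, K, V))
```

Unlike RNNs and LSTMs, which process tokens sequentially, attention relates all positions in parallel, which is a key reason Transformers scale so well on large training corpora.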