Artificial intelligence (AI) is revolutionizing many aspects of society, including healthcare, financial services, energy, transportation, education, security, employment, and legal services. As AI becomes increasingly integrated into everyday life, it is crucial to assess its potential effects on society as a whole.

To get started, it is important to define AI and understand its capabilities. AI refers to the development of computer systems that can perform tasks normally requiring human intelligence, such as learning, problem-solving, and decision-making. There are several types of AI, ranging from narrow AI, which is designed to perform a specific task, to general AI, which would be able to perform any intellectual task that a human can.

Artificial intelligence is one of the transformative technologies of the Fourth Industrial Revolution. Since 1955, when the term was first coined, AI has evolved into systems that can learn from large amounts of data and make predictions based on what they have learned. One major branch of AI is machine learning, in which computer systems are not explicitly programmed for a specific result but instead learn from examples. In other words, machine learning involves training AI models on numerous examples of the correct solution to a specific problem.
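To make the idea of "learning from examples rather than explicit rules" concrete, here is a minimal sketch of one of the simplest machine-learning methods, a nearest-neighbour classifier. All data, labels, and function names below are invented for illustration and are not drawn from any system mentioned in this article.

```python
# A one-nearest-neighbour classifier: the "model" is simply the labelled
# examples it was given, and a prediction is the label of the closest
# known example. No rule for "what makes an apple" is ever written down.

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(examples, features):
    """Return the label of the training example closest to `features`."""
    best_label, _ = min(
        ((label, distance(sample, features)) for sample, label in examples),
        key=lambda pair: pair[1],
    )
    return best_label

# Training examples: (weight in grams, diameter in cm) -> fruit label.
training = [
    ((150, 7.0), "apple"),
    ((160, 7.5), "apple"),
    ((10, 2.0), "grape"),
    ((12, 2.2), "grape"),
]

print(predict(training, (155, 7.2)))  # closest to the apple examples
print(predict(training, (11, 2.1)))   # closest to the grape examples
```

Real-world systems use far richer models and vastly more data, but the principle is the same: the behaviour of the program comes from the examples it was trained on, not from hand-written rules, which is also why biased examples produce biased AI.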

One widely used class of AI applications is image generation, in which tools such as Lensa and DALL-E create new images from a set of inputs or in a specified style. These applications have a variety of practical uses, including generating realistic images for creative work. Another key use case is text generation, exemplified by ChatGPT, developed by OpenAI. Text-generating applications can be used to improve the accuracy and sophistication of written content, and sectors such as academia, legal services, the creative industries and software development will be significantly affected. For example, an author might use a text-generating app to generate ideas for her next novel, or a musician might use one to write lyrics for her next song.

AI is impacting society in many ways, including disrupting business models, but the focus of this article is on the ethical and legal implications. As AI becomes more advanced, it is capable of making decisions with significant consequences for individuals and society. For example, AI systems are increasingly being used in healthcare to diagnose and treat patients, and in the criminal justice system to predict the likelihood of recidivism. There are concerns, however, that these systems may be biased and lead to unfair treatment of certain groups and individuals. It is therefore important to consider the ethical and legal implications of AI and to put appropriate measures in place to avoid bias and ensure fairness. The overarching question is: can we trust these AIs? The answer lies in the rules and principles that guide the design of AI.

As Big Tech companies (such as Google’s DeepMind) invest heavily in AI, are they guided by the goal of fostering responsible and beneficial AI? What do strong ethical principles for AI look like? The AI Principles of the Organisation for Economic Co-operation and Development (OECD) offer guidance, proposing among other things that AI must be robust, secure and transparent. Microsoft says it is guided by six principles in the design of its AI: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. These are good starting points, but much more needs to be done.


Governments and companies must work collaboratively to propose and adopt sound AI policies and strategies guided by ethical principles. Recognizing that data becomes AI (and biased data becomes biased AI), compliance with strong data governance and security standards must become the norm. We have seen early efforts in this direction, such as Norway’s National Strategy for Artificial Intelligence and the United Kingdom’s National AI Strategy. As technologists, innovators, policymakers, academics, and civil society come together in global and national conversations to build an ethical, AI-enabled society, it is incumbent on all of us to be guided by the principles of responsible AI as we accelerate innovation.


While AI will undoubtedly be instrumental in tackling global problems such as poverty, access to affordable healthcare, education, transportation, access to capital, access to justice, and clean energy, we must keep its dark side in mind. Some image generation and editing applications have shown the potential to abuse users’ private data and to sexualize minors, to say nothing of the inappropriate use of AI in law enforcement and warfare, or its exploitation by bad actors and criminals. It is society’s collective responsibility to advocate for responsible AI: artificial intelligence that is not biased but fair, explainable, and ultimately subject to human control and review.

Suffice it to add that while it is natural to have concerns about the potential impact of new technologies such as AI, it is important to approach these issues with a measured, evidence-based understanding of AI’s capabilities and limitations. Fears about demonic manipulation or the “Antichrist” are unfounded and should not trouble those interested in the responsible and ethical use of AI. AI is simply a tool designed to perform tasks according to a predetermined set of rules or algorithms. By carefully weighing its potential risks and benefits and working to ensure it is developed and used ethically and responsibly, we can harness its power to improve our lives and solve some of the most pressing challenges facing humanity.

Currently, there is no specific legal framework regulating AI at the international level. However, a number of existing laws, such as data protection laws, apply to the use of AI, and several initiatives are working to develop guidelines and best practices for its ethical development and use. (The World Economic Forum, for instance, is working on a global ethical framework for AI.) Like many other countries, Nigeria, through the National Information Technology Development Agency (NITDA), is taking steps to develop a National Artificial Intelligence Policy. These interventions are expected to shape AI policies and legal frameworks in ways that promote and encourage the ethical and responsible use of data, along with safeguards for how data is used.

AI is still in its early stages of development, and as it becomes more prevalent, it will be important to stay informed and participate in ongoing discussions about its impact on society. In the meantime, as we continue to discover the capabilities of AI, we must remain equally focused on the phenomenal capacity of the human mind. While AI will significantly improve efficiency, accuracy, and productivity across a spectrum of human endeavors, the human mind retains the unique ability to decide which problems to address and which opportunities to seize. This means that entrepreneurs, innovators, scientists, creators and thinkers of all kinds will continue to be essential: AI will always look to human ingenuity for leadership. The future belongs to those who can use AI in collaboration with human intelligence.

Rotimi Ogunyemi is a Technology Lawyer writing from Lagos