How boards can approach the evolution of generative AI like ChatGPT

Apr 6, 2023

Although Microsoft’s additional investment in OpenAI was the latest artificial intelligence (AI)-related headline, other generative AI applications have also been capturing people’s attention.

What is different now are the many immediate applications of the technology and the broad implications for boards and investors. We moved rapidly from the seemingly simple first introduction of OpenAI’s ChatGPT application, and we’ve since seen a rapid-fire series of new AI products with the potential to affect a significant range of personal and professional business activities, across a multitude of sectors, channels and industries.

Let’s start with a definition: generative AI is an umbrella term for a form of machine learning (ML) called “deep learning”. This form of AI uses machines trained on sets of data to perform certain tasks and/or make predictions without human direction – and recently, these systems made a big technological leap in how they learn.

As consistently witnessed with new technology, the applications and uses are introduced first, and regulation, policy and rules catch up later. OpenAI is working from the belief that by initially opening ChatGPT for broad public use, it can better train the model with real-time feedback from users. As an example, think of the feedback generated by the more than 1 million users who signed up to use OpenAI’s ChatGPT tool in the first five days after its release in November 2022.

One crucial thing for boards to consider is that ChatGPT and all AI-related technologies raise numerous ethical questions and issues. These include challenges such as copyright, intellectual property protection, and other licensing issues for AI-designed work.

Boards also need to discuss the implications of ChatGPT producing incorrect, inconsistent or even inappropriate answers as a result of using the entire Internet as its training ground.

It is likely, though, that this will be less of an issue for companies using the technology on specific, well-defined programs of work. It does require a focus on how a company can further train the machine with more specific and accurate data, and on thorough finessing as the technology evolves and is further embedded. For companies relying heavily on digital solutions, and for social media-based companies, this must be treated as an even more serious issue and a significant risk.

To manage the risks, a board’s scrutiny needs to focus on the messages and delivery mechanisms to all stakeholders, especially customers. This remit must also include suppliers and all of the company’s other human interactions. The fallout from incorrect or misleading information, from a breach of privacy or misuse of data, or from the machine’s misunderstanding of diverse and nuanced language can all have significant repercussions for the business.

According to Forbes, for the board to monitor the organisational use of AI, it has to have some understanding of how it works.

The authors wrote, “These are challenges that simply can’t be passed over to management. The fundamental complexity of generative AI can’t be an excuse for the board to be excessively deferential to management on its organisational use.

The extraordinary opportunities afforded by generative AI don’t allow the board to ‘go easy’ on monitoring its application.”

As directors, our attention must be balanced with a recognition of the significant learning curve that comes with board oversight of AI applications. Initially, directors will usually assign the details of implementation to management. However, just like with any other significant program of work, we remain responsible for monitoring and measuring the success and outcomes of that implementation.

As difficult and complex as the technical principles and jargon may be, and as alien as the concept and the believability of its potential may seem, boards must increase their direct engagement with the principles of generative AI if we are to fulfil our responsibilities. It requires a certain amount of bravery, resilience, contemporary thinking and openness to what is possible, balanced with pragmatic, measurable and business-orientated deliverables.

This enhanced engagement requires not only board education about the technology, but also an awareness of how generative AI may be applied by the company, and of all related and relevant social, legal, ethical and governance issues.

Ultimately, the board will need to consider all the trust-related concerns of consumers, employees and investors; the potential impact on board composition and skills (including the possible need for subject matter expertise or advice); the inevitability of government regulation; and the need to address the potential for capability gaps among management.

By Cheryl Hayman

March 2023, Digital Nation