Managing Ethical Considerations for AI in the Boardroom

by Cheryl Hayman | Mar 11, 2024

Like anything new, emerging technologies that are as empowering as they are disruptive introduce fresh ethical issues for consideration, discussion and mitigation in the boardroom. AI's move into the mainstream is the latest such innovation, and it has rightly captured the focus of businesses worldwide. While it will transform our society and bring with it myriad benefits, ethical decision-making about how AI is developed is critical to protect society from its negative implications.

At a time when shareholders are already concerned with boardroom practices, new technology will only intensify the ethical issues boards face. Developing and communicating a robust ethical framework signals to shareholders the board’s commitment to ethical technology adoption.

Accessing the right resources

Notably, the Australian Government has introduced its eight AI Ethics Principles, designed to ensure AI is safe, secure and reliable across the AI lifecycle: data and modelling, development and deployment, and monitoring and refinement. Non-executive directors should be familiar with these principles, together with the guidelines developed by industry bodies, organisations and non-profits. These, and examples of how other companies are managing AI through an ethics lens, are highly valuable.

AI systems are rarely transparent and are known to carry biases, which can complicate data-gathering and decision-making for boards striving to offer their shareholders greater transparency. AI's roll-out may prove to be the fastest technology revolution the world has seen, so keeping pace will add a further layer of complexity for boards in an ever-changing boardroom and regulatory environment.

How to maintain ethics when it comes to AI

Here are eight steps boards can take to ensure ethical decision-making on AI:

  1. Transparency and accountability: Boards should prioritise transparency and accountability when adopting and utilising AI technology. This includes fostering a culture of transparency in how AI systems are used, the data they rely on, and the decision-making processes involved.
  2. Bias identification and mitigation: Proactively identifying and mitigating biases in software solutions, including AI, is essential for ethical decision-making. Boards should collaborate with experts to assess the potential biases present in AI systems and develop strategies to address them. This may involve regular audits of AI algorithms (a minimal example of one such check appears after this list), as well as the implementation of safeguards to minimise any bias impact on decision-making.
  3. Brand integrity and reputation: AI's ability to fabricate convincing stories and information can have serious repercussions for public opinion, political discourse, and the reputation of businesses. Board directors should be concerned about the potential damage to their company's brand and integrity, as well as the impact on employees, if, for instance, ChatGPT is used to spread false or harmful information.
  4. Ethical framework development: Boards need to establish clear ethical frameworks and guidelines for the responsible use of technology. These frameworks should align with the organisation's values, prioritise data integrity, and incorporate good practices around data gathering, decision-making and stakeholder impact.
  5. Board education and expertise: Encouraging continuous education and deepening expertise on AI ethics among board members is crucial. AI is not just another agenda item for boards to understand once; its pace of change will outrun any innovation we have seen before, so boards and directors must make their professional education a continuous priority.
  6. Ethical impact assessments: Boards can implement formal processes for conducting ethical impact assessments of new technology, encompassing considerations such as data privacy, fairness, accountability, and potential stakeholder impacts. Systematically evaluating the ethical implications of technology adoption enables boards to address concerns proactively and make well-informed decisions.
  7. Stakeholder engagement: Engaging with executive and tech teams, shareholders and external stakeholders about the ethical aspects of technology adoption is essential. Boards can demonstrate their commitment to ethical decision-making by soliciting feedback, addressing concerns, and engaging in open dialogue.
  8. Collaboration with ethical AI experts: Boards can seek the guidance of AI and technology ethicists. Collaborating with, and learning from, professionals who have expertise in identifying, understanding, and addressing ethical considerations in AI can provide valuable insights and support. This should be a regular priority too.
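
To make the bias audit in step 2 concrete, below is a minimal sketch of one check such an audit might include: comparing automated decision rates across groups (demographic parity). The column names, sample data and four-fifths threshold are illustrative assumptions, not a prescribed method.

```python
# A minimal sketch of one bias-audit check (see step 2): demographic parity.
# Column names, sample data and the 0.8 (four-fifths) threshold are illustrative assumptions.
import pandas as pd

def approval_rates(df: pd.DataFrame,
                   outcome_col: str = "approved",
                   group_col: str = "protected_group") -> pd.Series:
    """Approval rate per group, so disparities can be tracked from audit to audit."""
    return df.groupby(group_col)[outcome_col].mean()

def flags_disparate_impact(rates: pd.Series, threshold: float = 0.8) -> bool:
    """True if any group's rate falls below `threshold` of the best-served group's rate."""
    return (rates.min() / rates.max()) < threshold

# Example: audit a quarter's automated decisions.
decisions = pd.DataFrame({
    "approved":        [1, 0, 1, 1, 0, 1, 0, 0],
    "protected_group": ["A", "A", "A", "A", "B", "B", "B", "B"],
})
rates = approval_rates(decisions)
if flags_disparate_impact(rates):
    print("Potential disparate impact - escalate for review:")
    print(rates)
```

In practice such checks would sit inside a broader audit alongside qualitative review; the board's role is to see that they are run regularly and that the results reach the right committee.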

AI and ethical governance

By proactively addressing the ethical concerns arising from AI, boards can demonstrate their commitment to ethical governance practices, transparency, and responsible decision-making in an increasingly AI-driven world.

Many directors I speak to, and work alongside, express a commitment to prioritising AI ethics and a focus on how their boards can steer their companies to make ethical decisions not only about AI, but about all technology choices and their implementation.

We have the opportunity now to upskill and open our minds to this enabling technology, while remaining open to learning, engaging in robust conversation and making rigorously considered decisions, so that AI in all its forms stays on a path that serves the betterment of business, all people and society overall.

Cheryl Hayman

Non-executive director, Ai Media Technologies Ltd, Silk Logistics Holdings, Beston Global Food Ltd, Guide Dogs NSW/ACT.