What is ethical AI?


Ethical artificial intelligence (AI) focuses on developing and deploying AI in ways that align with societal values, respect human rights and minimise harm. Achieving this balance is not without its challenges. AI systems are expected to interpret and act on data to discriminate between desirable and undesirable outcomes, yet they are trained on past decisions that may carry unconscious human bias. Commercial organisations, governments and not-for-profits looking to adopt AI for its potential to supercharge their operations will need to understand and manage the associated risks of unintended, improper or unethical decisions, which can be significant.

What’s at stake in AI ethics

Bias in AI systems

AI models learn from data, and if that data is incomplete or biased, the system's predictions and decisions may unwittingly reinforce discriminatory behaviours. For example, using AI to manage job applications could favour specific demographic groups when screening CVs if it learns from data that shows a pattern of human bias towards or against a certain age, race or gender. This may result in legal repercussions, reputational damage and a less skilled workforce.
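One way to make this risk concrete is to measure it. The sketch below, using entirely synthetic data and invented group labels, illustrates a simple fairness check sometimes called demographic parity: comparing selection rates across demographic groups in automated CV screening. It is a minimal illustration, not a complete fairness audit.

```python
# Hypothetical sketch: measuring selection-rate disparity (demographic parity)
# in automated CV screening. All data and group labels are synthetic.

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs; returns selection rate per group."""
    totals, selected = {}, {}
    for group, was_selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparity(decisions):
    """Gap between the highest and lowest group selection rates."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Synthetic screening outcomes: (demographic group, shortlisted?)
outcomes = [("A", True), ("A", True), ("A", False), ("A", True),
            ("B", True), ("B", False), ("B", False), ("B", False)]

print(selection_rates(outcomes))  # {'A': 0.75, 'B': 0.25}
print(disparity(outcomes))        # 0.5 — a large gap worth investigating
```

A gap this large does not prove discrimination on its own, but it flags exactly the kind of learned pattern, inherited from past human decisions, that warrants investigation before the system is trusted with hiring.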

AI, while rooted in mathematics, is ultimately shaped by the data and decisions of humans. This can surface hidden biases that produce outcomes we may find unacceptable once exposed.
Sarah Dods | Southern Hemisphere Region Leader for Advanced Analytics and Artificial Intelligence, GHD

Privacy and data security

The collection and use of personal data by AI for decision-making demands strict safeguards to prevent misuse or breaches. Unregulated gathering of sensitive data can lead to significant ethical and legal violations. Additionally, the data used to train models like GPT can sometimes appear in their predictions, raising concerns about privacy and the potential for unintended data exposure.

Decision-making transparency

With many AI systems treated as "black boxes," users often lack visibility into how important decisions, such as loan approvals or hiring recommendations, are reached. It is unfair to be on the receiving end of a black-box decision that is not in your favour without the ability to challenge it. Without explainability of the factors supporting a decision, organisations could find themselves discriminating illegally, and trust in those AI systems is eroded for everyone who uses them or is affected by their decisions.
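Explainability need not be complicated. For a simple linear scoring model, each feature's contribution to the final score can be reported alongside the decision, giving an applicant something concrete to challenge. The sketch below uses invented feature names and weights purely for illustration.

```python
# Hypothetical sketch: an explainable linear credit-scoring model that reports
# each feature's contribution to the decision. Names and weights are invented.

WEIGHTS = {"income": 0.4, "credit_history": 0.5, "existing_debt": -0.6}
THRESHOLD = 0.5

def score_with_explanation(applicant):
    """Return (approved?, per-feature contributions) for normalised inputs."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    return total >= THRESHOLD, contributions

approved, why = score_with_explanation(
    {"income": 0.8, "credit_history": 0.9, "existing_debt": 0.7})
print(approved)  # False
for feature, contribution in sorted(why.items(), key=lambda kv: kv[1]):
    print(f"{feature}: {contribution:+.2f}")  # existing_debt is the main driver
```

Here the applicant can see that existing debt, not income or credit history, drove the rejection. Real systems are rarely this simple, but the principle is the same: a decision that can be decomposed into its contributing factors is one that can be audited and challenged.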

Automated flaws with dire consequences

Certain AI-driven tasks carry life-altering implications. For instance, biases in computer vision systems for self-driving cars can lead to dangerous oversights, such as failing to identify pedestrians in a vehicle's path.

Sustainability and energy efficiency

On a larger scale, we are hosted by a planet with finite resources. Running AI is an energy-hungry process, which makes the environmental impact of developing and running AI models another ethical concern.

Electricity demand from data centres worldwide is expected to double by 2030 to around 945 TWh, eclipsing the total electricity consumption of Japan. AI will be the primary factor in this surge, as AI-optimised data centres are estimated to quadruple their electricity demand by 2030.

By 2027, global AI demand is projected to consume 4.2 to 6.6 billion cubic meters of water, which would be more than the yearly water consumption of Denmark.

Google’s greenhouse gas emissions grew by 48 percent since 2019, with the company citing data centre energy consumption and supply chain emissions driven by greater AI integration as the primary factors. Microsoft’s emissions grew by 29 percent since 2020, an increase it attributed to building more data centres to support AI.

Global efforts to create ethical standards

Recognising the need to address these challenges, governments and global organisations are working hard to build robust ethical frameworks for AI use. Some key efforts include:

The EU Artificial Intelligence Act

This groundbreaking legislation takes a risk-based approach to safeguarding human rights and regulating high-risk AI applications in Europe.

Australia's AI safety guardrails

Australia recently launched guidelines for AI systems, aligned with Australia's AI Ethics Principles, emphasising transparency, accountability and public trust.

International standards organisations

Institutions like ISO and IEEE play a pivotal role in shaping AI standards by outlining best practices for risk management, bias mitigation and system transparency.

ISO/IEC 42001, the first global standard for AI Management Systems (AIMS), outlines the requirements for implementing, maintaining and improving responsible AI use, focusing on ethics, transparency and continuous learning to manage risks and opportunities when developing and using AI.

Organisations can consult ISO/IEC TR 24368 to learn how to evaluate AI systems ethically, providing a framework to align AI development with global standards for fairness and accountability.

While these frameworks are steps in the right direction, they represent ongoing work. Ethical AI will evolve as the technology advances, demanding a global collaborative effort that harmonises innovation with responsibility.

Why ethical AI matters for businesses

AI has grown beyond experimental use by tech giants and is now quickly being integrated by leading enterprises across industries. Companies that adopt and deploy AI responsibly gain a strategic edge while building public trust.

For GHD’s clients, ethical AI is especially relevant as adoption expands into sectors like engineering, construction, infrastructure, manufacturing and resource management. Here's why businesses must prioritise ethical AI practices:

Building resilience in the face of rapid evolution

AI is evolving at an extraordinary pace. Failure to adopt it can leave organisations scrambling behind industry leaders. However, deploying AI without understanding the risks can lead to significant legal challenges and reputational damage. Companies need strong governance structures to oversee AI strategies.

“One of the big risks associated with AI is not getting on the train before it’s moving too fast to hop on board… it's unreasonable to expect that every organization is going to be able to pick up AI seamlessly and apply it at its first effort. There is learning that has to take place, and there's change management that has to take place to be able to use it effectively.”

Sarah Dods | Southern Hemisphere Region Leader for Advanced Analytics and Artificial Intelligence, GHD

Opportunities for innovation and growth

Organisations that treat ethics as an enabler, rather than a restriction, can unlock groundbreaking opportunities. By focusing on equity, accountability and fairness in their systems, businesses can provide solutions that reflect society's long-term interests, enhancing customer satisfaction and loyalty.

AI presents both immense opportunities and significant challenges. By adopting ethical practices, organisations can unlock the potential of this game-changing technology without losing sight of their larger responsibilities.


Want to go deeper?

Explore more about ethical AI and the importance of global standards and governance. Delve into the current ethical landscape, key issues and strategies for ensuring responsible AI integration on Transform – Tomorrow’s thinking, today.
