
Changing Requirements of Tech Professionals: Quality, Sustainability and Ethics of AI

30 Oct 2025 by Hanna-Mari Ilola

Introduction

How should tech professionals prepare for a changing business landscape, and how should they adapt and develop their skills in an AI-driven industry?

You may already use AI tools such as GitHub Copilot, ChatGPT, and Gemini in your day-to-day tasks. They are useful, for example, for automating repetitive programming tasks and for improving code quality through automated bug detection and testing, which enables faster software development.

The software professional’s role is shifting toward the big picture: strategic thinking and orchestrating AI technologies, while AI handles much of the manual work. However, even though AI can automate many programming tasks, software development will remain human-centric (at least for a long time), because software engineers must also supervise the quality, security, and ethics of the software. What technical and soft skills do tech professionals need when working together with AI?

In this blog post we’re going to focus on maintaining quality, ethics and data security in AI-driven development.


AI Quality and Ethical Risks


1. Inaccuracy and Bias

There are several risks that software developers need to tackle when working with AI models in development projects:

  • AI-generated content can be misleading and may even present inaccurate information as fact.
  • There is evidence that some AI models carry biases, which can have severe consequences, such as promoting inequality without anyone noticing or addressing it.

Causes of Biases and Inaccuracies

The root cause of these reported biases and inaccuracies is the data used to train genAI models. GenAI tools are trained on public, human-made content from the internet, and they can absorb the biases and inaccuracies that content contains. The problem is that genAI models are not designed to verify whether content is true or false, which can lead them to reproduce inaccurate or biased material.

2. Data Privacy Risk

AI models, especially free ones, may use user data for training purposes. This raises data privacy concerns that the development team must address with dedicated steps and processes when using AI in software development.

3. Environmental and Sustainability Risk

Software development with AI tools may not be sustainable, because the resources needed to develop and run AI models increase electricity demand, carbon emissions, and water consumption.

  • Training an AI model requires a lot of resources, and the energy demands do not end once the model is trained.
  • Processing every user request requires energy, and researchers have estimated that a single AI query requires five times more electricity than a web search.
  • AI usage keeps increasing, so how do we ensure sustainable development?


Actions for Tech Professionals


1. Regulation and Governance

The European Union has adopted legislation focused on safety and fundamental rights in AI usage, the AI Act, which every company doing AI-assisted software development is legally required to take into account. Although company legal departments or legal consultants usually help with this, it is beneficial for every software engineer to study the AI Act and thereby help the engineering team account for the mandatory steps when planning the software development process.

To tackle all of these risks, it is important for companies to adopt and follow an AI governance framework that addresses ethics, sustainability, data privacy, and security when using AI in software development.

2. Preventing Bias and Ensuring Quality

Preventing biases when using AI in software development is a big part of an AI engineer’s and data professional’s work.

  • Fairness Algorithms: Designing fairness-aware algorithms and adding them to AI training processes is primarily the responsibility of data scientists. In practice this means methods for processing the data before it is used to train the AI tool, and for post-processing the outputs, in other words editing them, to ensure equality and prevent bias (a minimal sketch of such a check follows this list).
  • Human Auditing: Human eyes are also needed to audit AI-generated content. Involving other professionals, for example the legal, data, or HR departments, in this process is a good way to ensure the work is in line with good practice.
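
To make the fairness idea concrete, here is a minimal sketch in plain Python of one common check: the demographic parity gap between groups in a classifier’s predictions. The column names and data are hypothetical stand-ins; real projects would typically rely on dedicated libraries such as Fairlearn or AIF360.

```python
# Minimal fairness-check sketch: demographic parity gap for a binary
# classifier. The "group" and "prediction" keys are hypothetical stand-ins.
from collections import defaultdict

def demographic_parity_gap(records):
    """Largest difference in positive-prediction rates between groups."""
    positives, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        positives[r["group"]] += r["prediction"]  # prediction is 0 or 1
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

data = [
    {"group": "A", "prediction": 1}, {"group": "A", "prediction": 0},
    {"group": "B", "prediction": 1}, {"group": "B", "prediction": 1},
]
print(demographic_parity_gap(data))  # 0.5 here -> a gap worth investigating
```

A gap close to zero suggests the model treats the groups similarly on this metric; a large gap is a signal to revisit the training data or apply pre- or post-processing corrections.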

Critical Thinking and Verification

Unlike AI models, humans can think critically and evaluate whether AI-provided content is true or false.

  • Software developers need to view AI-provided content with a critical eye and apply their own judgement.
  • A good way to do this is to check the information against peer-reviewed publications and research, or to consult colleagues.
  • Developers should always double-check the accuracy of AI-created content and remain mindful of its quality.
  • In addition to checking content against trusted, peer-reviewed sources, developers can improve accuracy further by using retrieval-augmented generation (RAG) models, which draw their information only from trusted sources; a rough sketch of the idea follows this list.
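
To illustrate what a RAG setup does, the sketch below is a deliberately simplified stand-in: the keyword-overlap retrieval and the call_llm stub are assumptions for illustration, not any vendor’s API. Production systems use embeddings and a vector store, but the principle of grounding the answer in a vetted corpus is the same.

```python
# A minimal retrieval-augmented generation (RAG) sketch. The scoring and the
# call_llm stub are illustrative stand-ins, not a real vendor API.
def retrieve(query, corpus, k=2):
    """Rank trusted documents by naive keyword overlap with the query."""
    q = set(query.lower().split())
    return sorted(corpus, key=lambda d: len(q & set(d.lower().split())), reverse=True)[:k]

def call_llm(prompt):
    """Stand-in for a real LLM call; just echoes the prompt here."""
    return f"[model response grounded in]\n{prompt}"

def answer(query, corpus):
    context = "\n".join(retrieve(query, corpus))
    # The grounding instruction tells the model to answer only from trusted context.
    prompt = f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

trusted_docs = [
    "The GDPR regulates the processing of personal data in the EU.",
    "Peer review is a quality-control method in scientific publishing.",
]
print(answer("What does the GDPR regulate?", trusted_docs))
```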

3. Process Standardization and Collaboration

  • Software developers can help an AI tool produce output that is as accurate as possible by giving it clear, well-articulated, and structured inputs (see the prompt-template sketch after this list).
  • Providing feedback to the AI model is also a good way to ensure good performance and resilience over time.
  • It is important for companies to build pre-approved, standardized workflows and libraries to ensure the security and quality of the software product when developing with AI tools.
  • A transparent work culture and a low threshold for consulting other professionals (such as the company’s legal, HR, QA, and engineering departments) are also important and reduce the risk of biased or inaccurate content and weak data security.
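
As a small illustration of structured inputs, here is a sketch of a prompt template in Python. The field names and wording are just one possible convention; the point is that an explicit role, constraints, and output format leave the model less room for guessing.

```python
# A sketch of a structured prompt template. The fields are illustrative;
# explicit role, constraints, and output format reduce ambiguity and make
# the AI output easier to review.
def build_prompt(task, constraints, output_format):
    return "\n".join([
        "ROLE: Senior software engineer reviewing generated code.",
        f"TASK: {task}",
        "CONSTRAINTS: " + "; ".join(constraints),
        f"OUTPUT FORMAT: {output_format}",
    ])

prompt = build_prompt(
    task="Write a Python function that validates email addresses.",
    constraints=["no external libraries", "include type hints", "add unit tests"],
    output_format="a single fenced Python code block",
)
print(prompt)
```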

4. Sustainability Actions

While the AI Act’s core focus is on risk classification, it also takes sustainability and environmental effects into consideration. There are also concrete steps software developers can take to make AI development as sustainable as possible.

  • Before starting a software development project, planning the AI workflows for efficiency is key to cutting unnecessary steps from the development process.
  • Tracking the usage of AI tools helps uncover the most energy-intensive parts of the process and shows where to optimize; a logging sketch follows this list.
  • Testing the AI models and infrastructure helps reveal further inefficiencies.
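
As a sketch of what tracking could look like in practice, the snippet below logs each AI call to a CSV file with a per-request energy estimate. The figure used is a placeholder assumption, not a measured value; the value of the log is in revealing which tools and workflows dominate usage.

```python
# A sketch of lightweight AI-usage tracking. The per-request energy figure is
# a placeholder assumption; replace it with data relevant to your setup.
import csv, time

ESTIMATED_WH_PER_REQUEST = 3.0  # hypothetical estimate, not a measured value

def log_ai_call(logfile, tool, tokens_in, tokens_out):
    """Append one usage record so energy-heavy workflows become visible."""
    with open(logfile, "a", newline="") as f:
        csv.writer(f).writerow(
            [time.time(), tool, tokens_in, tokens_out, ESTIMATED_WH_PER_REQUEST]
        )

log_ai_call("ai_usage.csv", "code-assistant", tokens_in=850, tokens_out=420)
```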

5. Data Security Strategy and Code Integrity

To tackle data security and privacy risks, it is important for software engineers to be mindful of an AI tool’s terms of service and to train colleagues on the topic when needed.

  • An essential addition to the company strategy and AI governance is encrypting sensitive user data and anonymizing it where possible (see the masking sketch after this list).
  • Access control limits who can interact with the AI system and the data within it.
  • Strong authentication, such as multi-factor authentication, together with logging of all system behaviour and user activity, helps detect potential security risks.
  • Ensuring compliance with data security laws (such as the GDPR) is mandatory.
  • All of these actions also require regular audits and updates to the processes and tools in use.
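
As one concrete example of the anonymization step, here is a sketch that masks obvious personal data before text is sent to an AI tool. The regex patterns are illustrative only; production systems should pair vetted PII-detection tooling with the encryption, access control, and audit logging described above.

```python
# A sketch of masking obvious personal data before text reaches an AI tool.
# The patterns are illustrative, not a complete PII-detection solution.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def anonymize(text):
    """Replace matched personal data with placeholder labels."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(anonymize("Contact Jane at jane.doe@example.com or +358 40 123 4567."))
# -> "Contact Jane at <EMAIL> or <PHONE>."
```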

AI Code Integrity

  • When using AI-generated code, a good practice is to run it through sandbox and stand-alone testing to ensure its security and quality; a rough sketch follows this list.
  • In general, it is good to do some manual coding as well to maintain your own coding skills. Relying solely on AI-generated code is a risk in terms of both quality and security.
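
As a rough sketch of the sandbox idea, the snippet below runs AI-generated code in an isolated Python subprocess with a timeout. This is only a first line of defense; proper sandboxing means containers, VMs, or restricted runtimes with no network or filesystem access.

```python
# A rough sketch of isolating untrusted, AI-generated code in a subprocess.
# Treat this as a first line of defense only, not a real sandbox.
import subprocess, sys, tempfile

def run_sandboxed(code: str, timeout: float = 5.0) -> subprocess.CompletedProcess:
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    # "-I" runs Python in isolated mode (ignores env vars and user site-packages).
    return subprocess.run(
        [sys.executable, "-I", path],
        capture_output=True, text=True, timeout=timeout,
    )

result = run_sandboxed("print(sum(range(10)))")
print(result.stdout)  # "45" -- review stderr and the exit code before trusting it
```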


Required Skills for the AI Professional


1. Ethics, Bias Mitigation, and Collaboration

  • DEI/Fairness: DEI policies are an established part of company processes, and it is beneficial for everyone in the company to study DEI frameworks; they play a role in AI development as well. For software engineers and data professionals, knowing these frameworks is crucial when adopting fairness-aware algorithms in an AI software development project.
  • Fairness Algorithms: Learning to build fairness-aware algorithms is essential, especially for data scientists and ML engineers.
  • Cross-Functional Work: The ability to work in cross-functional settings and multi-professional teams is beneficial here as well.

2. Quality and Verification Skills

Maintaining the quality and accuracy of AI-assisted outputs requires a specific combination of cognitive and technical mastery. Professionals must combine critical thinking with solid research skills to verify AI-generated content. In addition to professional judgment, technical proficiency is needed to ensure that the AI process itself produces reliable results, from input structuring to feedback loops. Managing the quality of AI-generated code demands a comprehensive quality assurance (QA) mindset throughout the development lifecycle.

Key Competencies for Quality and Verification:

  • Criticality & Research: Critical thinking and professional judgement, combined with the research skills needed to double-check AI-generated outputs for accuracy (verifying them against trusted sources such as peer-reviewed research).
  • RAG/Data: RAG model implementation skills and knowledge of data provenance, which make the verification process more effective.
  • Prompting/Feedback: Prompt engineering skills (creating well-structured prompts) plus feedback-loop management and model-refinement skills (working effectively together with the AI tool).
  • Quality Assurance: A comprehensive quality assurance mindset, which concretely involves skills in test automation, test design, bug reporting, and root cause analysis; a small test sketch follows this list.
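
To make the QA mindset concrete, here is a sketch of the kind of unit tests one might write before accepting AI-generated code. The validate_email function stands in for a hypothetical AI-generated implementation; the tests encode the requirements independently of how the AI wrote it.

```python
# A sketch of unit tests you might write before accepting AI-generated code.
# validate_email is a hypothetical AI-generated function under review.
import re
import pytest

def validate_email(address: str) -> bool:
    """Hypothetical AI-generated function under review."""
    return bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", address))

@pytest.mark.parametrize("address,expected", [
    ("user@example.com", True),
    ("no-at-sign.example.com", False),
    ("", False),
    ("user@@example.com", False),
])
def test_validate_email(address, expected):
    # The expectations come from the requirements, not from the AI's code.
    assert validate_email(address) == expected
```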

3. Data Security and Compliance Skills

To ensure the best data security and privacy possible when working with AI tools, software and data professionals must have a comprehensive skill set:

  • Legal Awareness: This begins with legal awareness and due diligence: developers must be mindful of the terms of service of any AI tool and understand how user data is used. It is good to share this knowledge within the software development team through training and knowledge transfer.
  • Technical Security: On the technical side, core security skills are key. Professionals must master data security implementation and the privacy-by-design concept, which includes security principles such as access management and security architecture. To protect sensitive information, engineers need skills in security testing and isolation techniques, such as sandbox testing AI-generated code.
  • Programming: Maintaining core programming skills is also essential, as developers cannot rely solely on AI-generated code, which may contain unnoticed vulnerabilities.
  • Process & Risk: A crucial part of preventing risks is process management. This requires expertise in risk management, process standardization, and workflow automation.
  • Monitoring & Compliance: The development team must prioritize security monitoring and incident detection. Finally, the ability to ensure regulatory compliance with the GDPR and other data security laws is mandatory, making the ability to understand legal text a non-negotiable skill in the AI-driven development environment.
