
Regulation as a Stepping Stone for Innovation: A Q&A with Virginia Dignum

February 26, 2024

Most will agree that Responsible AI (RAI) should emphasize trust, cooperation, and the common good, but taking responsibility is always going to involve some degree of regulation, governance, and awareness. So said Virginia Dignum, professor of responsible artificial intelligence at Umeå University, Sweden, during our recent Distinguished Lecturer seminar.

Dignum, who is also a member of global AI research and advisory groups including the United Nations Advisory Body on AI, the Global Partnership on AI (GPAI), UNESCO, the Dutch AI Alliance, and the WEF Global Future Council on AI, presented an engaging talk titled “Beyond the AI Hype: Balancing Innovation and Social Responsibility.”

The field of ethics understands that trade-offs are often necessary and decision-making is always contextual. But that doesn’t mean regulation is a blunt instrument. Rather, Dignum argued, RAI is a stepping stone for innovation—not merely an option but the only way forward in AI.

At the close of her talk, Dignum answered some of the many attendee questions live and later followed up with written responses to others. Check out her answers below. Watch the full talk here.

  1. How can we decide what we want AI to be before or while we are doing it, if human curiosity and desire to know for the sake of knowing drives us to develop AI without thinking twice?
    I think it's a bigger question than I can answer now, but the point is not so much that we need to decide once and for all, or that we have to decide before we build the systems; rather, this question needs to be continuously on our minds. Of course, we do need to explore, and we want to be driven by curiosity and see what is possible. That is how science and technology evolve. We need to keep that curiosity going, but at the same time we have to keep asking ourselves, “What do we want? What are the consequences? What are the possible uses, effects, and impacts of the directions we are taking?” So we do not need to answer the question directly or definitively, and we can do it in parallel, but we cannot forget to ask ourselves this question continuously.
  2. Could you speak a bit more on what the role of government regulation is in making AI responsible?
    I think there are many roles. Regulation or governance is not just about generating legislation or laws for AI. Governance is a multifaceted field in which hard regulation by government takes place, and it is necessary, but we also need to think of regulation or governance in terms of the development processes, the design life cycle, the methods we use, the engagement, and the principles and vision of the organizations involved. Each of us needs to think about the driving principles for what we want to do at the individual level, the developer level, the user level, the governing level, and the company level. Governance by countries then ensures a level playing field, a shared baseline on which we all know the minimum requirements and demands for the systems being developed. That ensures that the public, consumers, and users can have minimum expectations about what is being done. At the same time, it's not necessarily sufficient to develop and move the field forward; it guarantees the minimum requirements for safe and appropriate use and development of these systems.
  3. What are your thoughts on the ability of AI to make normative judgements, and on adding regulations that give governments the ability to decide what those normative judgements are?
    This implies that we are able to agree on normative judgements, which is hard. Our ethics and norms lead to different decisions depending on time, place, and situation. This includes the normative interpretation of laws; that is why we need judges and juries. Leaving it to AI means that we risk a uniformity of decision that will not be able to take nuance into account. We would also need to decide on accountability for AI decisions that turn out to be wrong: who will be responsible for the damage?
  4. What frameworks can be put in place to ensure that AI solutions address socio-technical issues effectively?
    Some examples: auditing and monitoring mechanisms, assessments to check whether what you are doing aligns with existing regulations or guidelines. But most likely we need to rely on standardization.
  5. Are the environmental costs of AI possibly counter-balanced by the way in which the technology could accelerate climate change mitigation products and services?
    Not necessarily. At this moment, we know what the costs of AI are. (For example, I just read on X someone comparing the energy needs of GenAI to the power consumption of the entire nation of Germany.) And the costs are not only energy: rare metals are needed for chips, and water for manufacturing and for data centers. We know the costs but are much less certain of the benefits, which are often more wishful thinking than concretely measurable. But sustainability is more than environment and climate. We also need to consider the human and social costs, e.g. ghost workers and a widening digital divide.
  6. Regarding the proliferation of malicious AI-generated photos or videos, how can we inform the public to question their origin or to realize that they’re fake?
    This is very hard. There is a lot of work going on, but it is becoming a cat-and-mouse game. The best approach is to create awareness that ALL of what you read may be false and to encourage people to compare sources. By the way, not all fake news is due to AI; we have had it for a long time.
  7. If organizations are implementing responsible AI in the development or deployment of AI tools, are there tools that can help them evaluate whether projects align with responsible AI values amid the many frameworks and initiatives? Is there a way to assess their level of responsible AI?
    My research group is actually working on this. We have a tool (a very simple prototype is at rain.cs.umu.se) that allows organizations to assess their level of responsible AI and compare it with previous levels or with the situation of others. Many other groups are also working on this, and many of those tools are commercial, but you can also look at the ALTAI tool, which is aligned with the EU trustworthy AI guidelines.
  8. Do you feel that there is a need for a global AI lexicon that defines the terminology used in AI development and used by all stakeholders?
    There are some curricula on responsible AI. See also my own book Responsible AI. For the general public or school children, I recommend Elements of AI. But there are many more such instruments.


Learn how executives and team leaders can create an actionable RAI blueprint for their organizations at our new Responsible AI Executive Education Courses! And check out our other incredible AI seminars here.