Can AI Go to Jail? – Consequences for the Corporate Governance

Uploaded May 16, 2023

Developments in Artificial Intelligence confront us with new problems that must be solved sooner or later. This holds for corporations in particular, since under current legislation the board is ultimately accountable for the company's activities, including those involving AI.

Currently this issue is addressed legally and ethically by stipulating that technical solutions, including AI, should always serve the benefit of mankind, and that the human species should therefore remain in control of AI in the broadest sense of the word. Regulators require that AI-supported or even fully AI-controlled processes always be explainable, which is a challenge in itself. Ethicists sometimes even declare that it is a human obligation to remain in control: if this cannot be warranted, humans should not use the AI instrument in the first place.

Although the authors fully agree that AI should always work to the benefit of humans, they also believe that the legal and ethical proclamations, including the regulatory requirements regarding the explainability of AI, constitute a risk exposure for human beings. If AI is true AI and starts learning better and faster than we do, our capacity to explain it may quickly fall short. Moreover, our governance structures are not built to handle such problems in a timely fashion.

The authors research the following question:

How can AI be sufficiently controlled, considering the speed and magnitude of its current and potential future development, to ensure that it contributes to the benefit of human beings without exposing them to uncontrollable risk?

In this work the authors explore the current literature on AI, legal frameworks, corporate governance, regulatory requirements and ethics to describe the state of AI governance. Potential shortcomings are discussed and the consequent risk exposures elicited.

In the second part of this research the authors explore potential solutions inside the AI sphere. The question to be answered here is whether AI can be run with a kind of "built-in conscience" based on our ethical standards, one that can develop itself quickly enough to protect us against detrimental effects. The authors explore a framework based on the network field model, a model borrowed from physics.

The authors are aware of the magnitude of the problem, which is unlikely to be solved in a single contribution. They therefore plan a third part, which will set out a framework for further research in this area.

