One part of AI is what is known as machine learning: IT systems recognize patterns and regularities in data and, from these “experiences”, develop generalizations that can be applied to new problems. This is done by means of rules, mathematical models and (more or less complex) algorithms.

Ethical and social issues surrounding the use of AI
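Before turning to those questions, the learning idea just described can be made concrete with a minimal sketch in pure Python (a toy illustration with invented data, not from any of the articles below): the program derives a rule from example pairs and then applies that generalization to an input it has never seen.

```python
# Toy sketch of "learning from experience": fit a line to example
# data, then apply the learned rule to a new, unseen input.
# Pure Python, no libraries; all data is invented for illustration.

def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b to the training examples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Training examples secretly follow the pattern y = 2x + 1
xs, ys = [1, 2, 3, 4], [3, 5, 7, 9]
a, b = fit_line(xs, ys)

def predict(x):
    """Apply the learned generalization to new input."""
    return a * x + b

print(predict(10))  # -> 21.0: the learned pattern, applied to unseen data
```

Real machine-learning systems differ in scale and model complexity, but the loop is the same: extract a regularity from examples, then generalize it to cases the system has never encountered.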
Every software provider should be aware of the need to work with this type of software in an ethically and morally responsible manner, especially given what is technically feasible with it. Every new technology must face social questions; a pure “because it works” mentality is not enough. That mentality did not work with the atom bomb, and it does not and will not work with potentially ubiquitous software. It is therefore important to address potential dangers now, when it is foreseeable that every area of our future lives will somehow involve software – and probably artificial intelligence. We must recognise that a socially relevant technology influences opinions – and not just since the advent of fake news.
The posts are all from the USA and the articles are in English; each is briefly summarised below.
Decoding AI for HR

Bill Kutik has been a real heavyweight in the HR industry for decades; his writing style is extremely entertaining, and the content is always highly informative. Kutik opens his article with a comparison from the 1980s, when the monumental mainframes standing in data centres (mainframe architecture) were replaced by the distributed data management that came with PCs (client–server architecture). It remains to be seen which manufacturer will make the breakthrough in the AI sector. Personally, I am curious whether it will be a single manufacturer or whether times have changed here too.
Kutik then introduces the digital assistant “Olivia” as an example; it is used, for instance, to ask applicants initial pre-selection questions or to schedule appointments. But, as he himself acknowledges, this is still far from actual artificial intelligence.
How algorithms rule our working lives

The Guardian article describes a whole series of examples that use algorithms to assess applicants or employees.
How can we stop algorithms from telling lies

This article by the same author explains possible causes and effects of what she calls “bad algorithms”. Systemic discrimination, for example, can occur because unconsciously prevailing social prejudices – reflected in the vast amount of content found on the Internet – are automatically generalized by machine-learning tools into patterns and regularities.
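That mechanism – a learning tool absorbing prejudice from its training data – can be shown with a deliberately over-simplified sketch (all attributes, data and decisions are invented for illustration; no real system works this crudely):

```python
# Toy illustration of bias generalization: a "model" trained on
# biased historical decisions reproduces that bias on new applicants.
# Each applicant is a set of attributes; the model scores an attribute
# by how often it co-occurred with past acceptances.

past_decisions = [
    ({"degree", "zip_code_A"}, True),
    ({"degree", "zip_code_A"}, True),
    ({"degree", "zip_code_B"}, False),  # historically rejected
    ({"degree", "zip_code_B"}, False),
]

def attribute_scores(data):
    """Learn an acceptance rate per attribute from past decisions."""
    counts = {}
    for attrs, accepted in data:
        for attr in attrs:
            hits, total = counts.get(attr, (0, 0))
            counts[attr] = (hits + int(accepted), total + 1)
    return {attr: hits / total for attr, (hits, total) in counts.items()}

scores = attribute_scores(past_decisions)

def score(applicant):
    """Average learned score; unknown attributes count as neutral 0.5."""
    return sum(scores.get(attr, 0.5) for attr in applicant) / len(applicant)

# Two equally qualified applicants who differ only in zip code:
print(score({"degree", "zip_code_A"}))  # -> 0.75
print(score({"degree", "zip_code_B"}))  # -> 0.25
```

The qualifications are identical; the learned zip-code pattern alone drives the gap. Real systems are far more sophisticated, but the principle is the same: a proxy for a protected attribute in the training data becomes a “regularity” the model faithfully applies.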
Can an algorithm be racist?

Luke Dormehl’s contribution also deals with potential social dangers surrounding the use of algorithms – though not specifically in recruiting. The article mentions several research papers on the topic and cites the researcher Dr. Noble, who examined Google’s search engine a few years ago and came to astonishing findings. Her demand: “When designing technology for a society, you should have a deep education in humanities, humanism, and social science”.
AI just beat top lawyers at their own game

I regularly come across articles like this one. So much for the claim that “only simple jobs” will be eliminated – lawyers are on the losing list, too. Of course, this concerns only a few work processes; a lawyer does much more. Nevertheless, it shows what upheavals our entire working world presumably faces. I very much hope that some completely new professions will emerge.
The conference will explore scenarios for applying AI in recruitment: what the advantages are, why standards are important, and whether the many ideas will actually be usable in practice.
See you there?