Digital Paternalism. About Software and Its Impact on Human Decisions


Master's Thesis, 2017

72 Pages, Grade: 1,3

Markus Walzl (Author)


Excerpt


Table of Contents

Introduction

Software
What makes Software "soft"?
Software and Hardware: The Body-Soul Dualism Reloaded
Analog versus Digital
Algorithm
Artificial Intelligence
Software as an autonomous actor

Human Being and Software
Ubiquity
Interfaces
Ease and convenience
Digital Nudging
Habit Forming
Gamification

Discussion
Awareness and free will
Decisions
Software and the Epistemic Basis for Decisions
Software and the Immediate Circumstances of the Decision
Software as a Technical System and Revolutionary Technology

Conclusion

Acknowledgment

Literature and Sources

Introduction

"The file is a set of philosophical ideas made into eternal flesh." Jaron Lanier (Lanier, 2009: 9)

In his book "World Without Mind", published shortly before the completion of this work, Franklin Foer argues that the internet companies known as the "Frightful Five"1 undermine our free will and modify not just our behavior but also our thinking. He writes: "Facebook would never put it this way, but algorithms are meant to erode free will, to relieve humans of the burden of choosing, to nudge them in the right direction." (Foer, 2017)

Foer’s warning comes in the midst of a new wave of criticism aimed at digitalization.2

While until recently European managers and politicians felt obliged to make the pilgrimage to Silicon Valley, today warnings about the dominance of the internet giants and the presumed omnipotence of algorithms are hard to miss. The critics gain momentum from statements by Silicon Valley icons like Elon Musk or Sean Parker, founder of Napster and former Facebook top manager, who explained in an interview in early November 2017 that "Facebook was designed to exploit human 'vulnerability'" (Parker, 2017b) and that "only God knows what Facebook does with the minds of our children" (Parker, 2017a).

Are we bound to become creatures lacking willpower, tied to algorithms that are themselves only the precursors of a superior artificial intelligence which will dominate the world and subdue or terminate us?3 (Musk, 2016)

While the critics’ warnings get louder and shriller, the proponents of an unrestrained digitalization unabatedly promise nothing less than a solution to the biggest problems of humanity. (Kurzweil, 2012)

My intent with this work is to take a sober look at software as a basic component of digitalization and to examine the criticism that it may influence human freedom of decision-making and autonomy.

Software is the basic element of digitalization. In most instances, it is embedded in objects and systems and usually completely opaque, so it is often only noticed when it fails to function. (Harman, 2010)

Consequently, software seems inherently magical. It works in ways that are incomprehensible to many and produces complex results. Magic, however, by definition circumvents our intellect and produces euphoria or fear, while precluding any more differentiated consideration.

Software is a technology that is conceptually but also experientially hard to grasp. It is tool and language at the same time and features hints of vitality that put it in the vicinity of the human mind.

At its core, software is a control technology consisting of instructions and decisions. This characteristic is also on display when it interacts with us: it controls things FOR us, but by its controlling nature it also controls us.

The range of interactions with software, and of their possible impact on our lifestyles, is huge.

This influence takes place at several levels: First, software is a tool that individuals use intentionally to control other human beings. Second, software acts as a carrier (vector) of the adjustments, biases, and decisions of its production conditions. Third, it influences results through its own statement structure, for example through the selection of the data to be processed. And fourth, some authors also attribute an "active" agentivity to software (Beck, 2016).

Findings from behavioral economics and psychology used in software development have led to the deployment of a number of techniques aimed at controlling users, based on a reductionist and behaviorist idea of humanity. This widespread notion of controlling people with software through the design of decision-making situations has inspired the title "Digital Paternalism" of this work. For Gerald Dworkin, every "interference with a person's liberty for his own good" (Dworkin, 2017) is paternalism. In this sense, the many personalizations based on my user behavior for the purpose of better usability, or the content adaptations made without my consent, can already be considered paternalism.

Paternalism is tightly interwoven with the term autonomy, which I use as a point of departure in this work. I explore the term from the angle of attempts to limit our autonomy through software and thus expand on the traditional definition.

Since software is mostly advertised as a form of assistance or a solution to our problems and challenges, with our personal gain highlighted, the use of the term is justified. In the public debate, the impression emerges that we are hardly capable of deciding for ourselves anymore but are controlled to an increasing degree. Our freedom and power of judgement recede into the background, and the question emerges whether we can even be held responsible for our actions. I am of the opinion that "the algorithm made me do it" can never be a tenable excuse.

For the sake of this argumentation, I will also include the terms free will and decision, which must be seen as prerequisites for responsibility.

I am going to argue that our sphere of responsibility in principle does not decrease but rather increases with advancing digitalization. However, I will also note that, considering the type and timing of the influence exerted via software, this statement calls for critical analysis under concrete circumstances.

As a counter-position to paternalism, software can also be viewed as a necessary decision-making aid in everyday life in the 21st century.

The possibilities for designing the conditions of our decisions have multiplied with the almost ubiquitous use of software compared to the analog world, and so, by extension, have the possibilities for influencing the decisions themselves. With the new possibilities, technology also creates new decision-making situations that bring with them new ethical requirements.

Moreover, I will attempt to demonstrate that the perception of our humanity is substantially changing with the massive spread and use of software and due to our interactions with software. This does not remain without influence on our individual and collective normative framework of judgment. Every analysis must consider this aspect.

Currently, there is no comprehensive theory of human-software interaction, and consequently no universally valid assessment can be offered. An ethical analysis must occur on a case-by-case basis.

Under the premise of a naturalistic, under-determined conception of the world, we remain self-determined beings responsible for our decisions and actions even under the influence of software.

Software

"Software is everything. In the history of human technology, nothing has become as essential as fast as software." Charles Fishman (Fishman, 1996: 95)

What makes Software "soft"?

“Software is a great combination between artistry and engineering.” Bill Gates

Software is at the center of digitalization. Digitalization is not really about smartphones, server farms, fiber-optic cables, or the ubiquitous sensors. All this is inanimate matter without software.

Software is a description of the world we live in, the world we dream of and ponder about. Its inception is comparable in significance to the invention of writing, which made it possible to record thoughts for future generations: suddenly there seems to be no limit to what can be calculated, described, and simulated with software. We can create our own worlds with it. At the same time, it is the language of our electronic devices, the way they communicate with each other and the way we give them our instructions.

Under the paradigm of the complete describability of the world, we trust ourselves and our new tool "software" to achieve anything – from making us immortal to the total enslavement and domination of all humanity.

As with all big inventions, we recognize ourselves in our artefacts – we see ourselves as machines that run our consciousness like an operating system. Conversely, we assume that we created these machines in our image. (Liessmann, 2017)

For a long time, the discussion centered on the network effects of digitalization. Terms like "network society" were the focus, and the theories aimed at describing the internet. Only when the discussion shifted to the algorithms in social media applications did software become a prime consideration again.

Software is not a uniform set of rules; it is, however, more homogeneous than human languages. A few programming languages and frameworks and even fewer operating systems determine how digital procedures are developed and implemented. The procedures currently in use draw strongly on neurobiology and our model of the human brain, especially where basic human abilities such as pattern recognition or speech recognition are modeled, with the goal of creating "self-learning systems".

It is not the focus of this work to describe the operating principles of computers. However, I would like to define a few terms that I consider central and try to describe the nature of software and its properties as they relate to the topic discussed here.

In 1936, Alan M. Turing developed the concept of the Turing machine in his essay "On Computable Numbers" (Turing, 1936). He was not describing a physical apparatus but rather the operating principle of all computers built since then, conceived as "universal machines". This concept allows solving all mathematical problems that are solvable using an algorithm.
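
To make the concept less abstract, here is a minimal sketch of a Turing machine in Python – an illustration of the principle, not of Turing's original formalism; the rule table (a binary increment) is invented for this example:

```python
# A minimal Turing machine interpreter: a tape, a head, a state, and a rule
# table mapping (state, symbol) -> (new_state, symbol_to_write, head_move).

def run_turing_machine(program, tape, state="start", head=0, max_steps=1000):
    """Apply the rules until the machine reaches the 'halt' state."""
    tape = dict(enumerate(tape))          # sparse tape; unwritten cells are blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")      # "_" denotes the blank symbol
        state, write, move = program[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Rules for incrementing a binary number: walk right, then add 1 with carry.
program = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),  # 1 + carry -> 0, carry moves left
    ("carry", "0"): ("halt",  "1", "R"),  # 0 + carry -> 1, done
    ("carry", "_"): ("halt",  "1", "R"),  # overflow: write a new leading 1
}

print(run_turing_machine(program, "1011"))  # -> "1100" (11 + 1 = 12)
```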

To this day, the term software lacks a standardized and clear definition. Software keeps being described as the set of instructions that convey to a computer what it needs to do. Software determines what a machine controlled by software does and how (perhaps comparable to a manuscript). (Freund, 2006) In common parlance, the term software mainly refers to programs. In summary, the technical definition of the term "software" can basically be understood as "all the information that has to be added to the hardware to make the resulting computer system usable for a defined spectrum of tasks" (Wikipedia, 2017a).

Up to the 1950s, software and hardware were connected and perceived as one entity. Software was a part of hardware and was called program code. In 1958, the statistician John W. Tukey introduced the term software for the first time.4 (Freund, 2006)

In the German language, there is no real equivalent for the generic term "software" – one has to go as far as "programs" to find one. The term "operating procedure" describes the broad semantic meaning of the English word "software" reasonably well.5

Meanwhile, English often uses the term "code", which specifically refers to pieces of software that contain a specific operating procedure or the solution to a problem.

Programs are packages of code, i.e. operating procedures that have been written for specific tasks and that complement each other.6

This extremely short historical and etymological synopsis will resurface throughout the course of this work – especially in the context of the separation of hardware from software, the worldview associated with it, and the link back to our perspective on how the human being, especially the human brain, functions.

I would also like to demonstrate that we are not dealing with a phenomenon of the 21st century. The digitalization and modeling of the world through software on computers dates back to the Second World War, and its development goes hand in hand with cybernetics.7 The resources for the huge advances came from the military. The decoding of German encryption and the control of anti-aircraft cannons as the main applications of informatics (which today would likely be called "killer applications"), at a time when the field was still mathematics and only later morphed into its own discipline (information + mathematics = informatics), have taken a powerful hold as foundation myths.

Even though it appears today as if we could give a machine instructions in free, individual language, which would then be translated, this does not reflect reality at the code level. The use of programming languages following fixed rules is obligatory here.

All programming languages are formal languages for the formulation of data structures and algorithms and follow a mostly strict syntax. The list of programming languages is long; however, a handful of languages8 that have gained acceptance can be seen as foundational9. (Harper, 2016)

Programming means to write FOR a computer in order to make it execute actions.

„Programming involves a process of writing for machines, of inscribing in their functioning certain patterns, interfaces and logics which in turn condition its user’s possible interactions“ (Reigeluth, 2014: 245).

Here, Reigeluth points to the complex, reciprocal conditioning at work when programming software, which I would like to expand on.

Jaron Lanier, who was himself a developer for a long time, a pioneer of virtual reality and digital music, and has taught informatics at several universities, sees software as a reduction of options to the possibilities of the programming language itself – that is, to the adopted representation models and standards. From his own area of expertise, he mentions the MIDI standard for digital music, which has been in use for many years and stands in the way of real improvement. (Lanier, 2009)

Like Beck and Sedgewick (Sedgewick & Wayne, 2011), he sees an intense structuring of the world and its perception by software. He calls these early determinations, which exclude many other options in the future, "lock-in". The impact of digital tools on the outcome in some areas outpaces that of analog tools. (Lanier, 2009)

Programming languages have meanwhile spread globally. There are no local dialects in the traditional sense, even if, due to high performance demands, the big internet companies have developed something like their own variants in recent years, which have then often been made available to all as open source.10

Actually, the further development of these "languages" is surprisingly slow compared to hardware.

A basic structure of software is the program line, which generally contains a single instruction with its parameters. Execution proceeds line by line, and therefore the number of program lines is frequently given as an approximate measure of the scope and complexity of a software project. The mobile operating system Android has approximately 12 million lines of code, macOS approximately 90 million, Facebook approximately 70 million in the year 2013, and a modern car more than 100 million (McCandless, 2015). In 2015, according to Google, the entire source code of all Google services comprised two billion program lines and a size of 86 terabytes.11 (Potvin, 2015)

"Coding" – the activity of a programmer – is often a process of "assembling", that is, the arranging of existing program lines. Little code is written completely from scratch; often, concrete tasks are solved by assembling existing modules that circulate on the web (this can be anything from a simple web form to basic components like protocols and interface instructions to learning instructions for a neural network) and that are adapted or simply woven into a bigger software project. It works like collecting music and video files. (Lanier, 2009) Just as especially successful texts or pieces of music are often reused and bring their creators recognition, especially efficient and high-performing code fragments are almost celebrated as "pieces of art" and praised for their style in their respective environments.
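
A hypothetical miniature example of this "assembling": a concrete task is solved almost entirely by wiring together existing modules, with only a few lines of project-specific glue:

```python
# Illustrative sketch: a small word-frequency report built almost entirely
# from existing standard-library building blocks rather than new logic.
import re
from collections import Counter
from pathlib import Path

def top_words(path, n=10):
    text = Path(path).read_text(encoding="utf-8").lower()  # reused: file handling
    words = re.findall(r"[a-z]+", text)                    # reused: regex engine
    return Counter(words).most_common(n)                   # reused: counting/ranking

# Only the three lines of "glue" above are project-specific; tokenizing,
# counting, and file access come from modules written by others.
```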

The possibility of improving and revising a piece of software anytime and anywhere in the world has fundamentally changed the attitude towards software design. Production conditions, particularly for consumer software, differ strikingly from those of physical goods. Software is no longer a classic product delivered in its final form. Each delivery state is only a snapshot.12

Based on this observation, Lanier postulates a lack of humility in software development. An aircraft engineer would never put someone in a plane built on untested hypotheses. Software engineers do this all the time. (Lanier, 2009)

The structure of software is layered. Between the layers, translation repeatedly takes place from one programming language into another, from the so-called source text down to the executable program code.
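
In a language like Python, this translation between layers can be observed directly with the standard dis module, which displays the bytecode that a piece of source text is compiled into before a virtual machine (itself software running on hardware) executes it:

```python
# Making the layering visible: source text is translated into bytecode,
# one layer closer to the machine.
import dis

def add(a, b):
    return a + b

dis.dis(add)  # prints the bytecode layer, e.g. LOAD_FAST and BINARY_OP
              # instructions (exact opcode names vary by Python version)
```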

This layering is also reflected in the view of and analogy with the human being. We no longer compare the computer with us humans but, the other way round, use the computer as a metaphor and model for understanding the human being: the brain is the hardware (processing unit and hard disk), the mind is the software, our sensory organs are the sensors (this is where the word comes from, after all), and the body is the electro-mechanical machine receiving the control commands.13

Software is immaterial, consisting of the languages and notations it is written in. Although software can be saved on certain media, printed, displayed, or transported, the medium does not represent the software but only contains it.

Although it is conceivable to deposit bits visibly and tangibly on a carrier medium, "software" is generally an abstract term independent of carrier media. This applies to the generic term, but also to any concrete instance, like a certain application program.14

The electronic process, the individual bits in the machine, is not visible and not relevant for us. It is not comprehensible and cannot be put into context with the results of the calculations and the executed instructions. (Kitchin & Dodge, 2011)

This is fundamentally different from mechanics, where even for complex devices the connections and linkages (which to us are the epitome of causality) are comprehensible.

Few people would entertain the idea that measuring potential differences on a computer's circuit board could indicate which tasks the software is executing.15 Seen from this perspective, the neurosciences are attempting a "reverse engineering" of the human mind by measuring the activity of neurons in the brain. (Kurzweil, 2012; vom Brocke, Riedl, & Léger, 2013)

Software and Hardware: The Body-Soul Dualism Reloaded

„You can mass-produce hardware; you cannot mass-produce software – you cannot mass-produce the human mind.” Michio Kaku (Kaku, 2011)

Although software is generally invisible, it produces visible and tangible results. This puts it in the proximity of the human mind. Software likely does not possess consciousness16, but it displays characteristics of liveliness. And just like consciousness, software is not directly tangible. Yet it is more than language; we can only recognize it by its results or in its interaction with us.

Shaun French and Nigel Thrift describe software as “somewhere between the artificial and a new kind of natural, the dead and a new kind of living” with a “presence as ‘local intelligence’” (Thrift & French, 2002: 310).

This is remarkable since it would mean that software could take care of things autonomously, receive and process Capta17, evaluate situations, make decisions, or operate without human supervision or authorization.

In Adrian Mackenzie’s eyes, software is even a „secondary medium“. (Mackenzie, 2006)

In 1995, Nicholas Negroponte spoke of "moving bits, not atoms" (Negroponte, 1995) and thereby solidified the idea of bits and atoms as the essential building blocks of the world – the physical on one side and the digital on the other – as a paradigm of informatics. The thinking behind it is that digital information is virtual, ethereal, basically defying the laws of the physical world, and would therefore be more similar to mind and soul. (Reigeluth, 2014)

Hence, this idea follows the tradition of ontological dualism, most prominently represented by René Descartes. It also faces a problem similar to the body-mind problem: Are there interactions between body and mind, or between software and hardware, respectively? How do they happen? And where exactly in the machine or in the brain do they take place? (Beckermann, 2012)

These questions are the focus of the philosophy of mind, which has drawn increased attention with the discussion about artificial intelligence and the advance of the neurosciences.

The "Computational Theory of Mind" investigates whether machines are capable of thinking and whether the human mind isn't itself a thinking machine as well.18

The question of consciousness will be explored further, but at this point I find it important to note that we require, first, an analogy between the human being and the computer and, second, the conceptual separation of software and hardware to even entertain the idea that machines might develop human consciousness or that human consciousness could be digitally stored as software.19

Analog versus Digital

"Information systems need to have information to run, but information underrepresents reality" (Lanier, 2009)

Another seemingly unsolvable dichotomy pervades the discussion: software, with its digital basic structure, is said to be a rarity in nature, an entirely artificial product of the human being in the 20th century.

For Jaron Lanier, there is no real contrast between analog and digital. In his view, the dualism is a construct because in the end even the analog can be traced back to discrete particles or elements; he cites examples like the magnetized molecules of a tape recorder or the individual silver molecules of an analog film (Lanier, 2009). Others argue that the dualism is constitutive for nature and refer to the wave-particle dualism in physics (Hürter, 2016).

Both, however, observe more digitality in the analog world than our everyday language use would lead us to assume. It seems contrary to us and our worldly experience that everything could be calculated in 0s and 1s.20

Lanier's argument that the inner workings of a computer are "dirty" – that not one clearly defined voltage state exists but a number of different ones – is invalid. The decisive aspect of digitalization's success is abstraction: a switch is "off" even if a small residual voltage is present. A threshold value decides between 0 and 1, not a perfectly clean analog implementation.
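
A minimal sketch of this abstraction, with invented voltage values: a threshold function maps noisy analog readings onto clean bits:

```python
# The "dirty" analog value is abstracted into a clean binary digit.
THRESHOLD = 1.5  # volts; an assumed switching threshold for this example

def to_bit(voltage):
    """A threshold, not a perfectly clean signal, decides between 0 and 1."""
    return 1 if voltage >= THRESHOLD else 0

samples = [0.1, 0.4, 2.9, 3.1, 0.2, 2.7]   # noisy analog readings
bits = [to_bit(v) for v in samples]
print(bits)  # -> [0, 0, 1, 1, 0, 1]: small residual voltages still count as "off"
```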

Our DNA works in a similar way – based on base pairing, it is also a digital medium, a code like software, and it likewise requires a runtime environment for its execution. Indeed, it is possible to solve computational tasks with DNA: in 1994, Leonard Adleman built a prototype of a DNA computer in a test tube. The free reaction of the DNA could solve simple mathematical problems.21

The binary character of software at the lowest level asserts itself in the results and representations: it is a lot easier to tell a program to run or not to run than to tell it to run a little.22 Even if we as users often have the impression that a program does not run as it should.

Likewise, a rigid representation of human relationships is easier to find in digital social networks. Here, every user assumes a predefined status. Communication reduced to a few categories becomes reality. (Lanier, 2014) Software always assigns categories, mostly using virtual attributes that a person would not individually volunteer or generate but instead selects from a multiple-choice question, or that are calculated.

Algorithm

"An algorithm is a space of possibilities transformed into a predictable and calculable temporal sequence." (Reigeluth, 2014: 250)

The term software in this work goes beyond the algorithm, which has become a synonym for solution-oriented software. However, algorithms are central to the argument about paternalism via software because they can calculate solutions based on the mathematical modeling of decision-making situations.

An algorithm23 is an unambiguous operating instruction for solving a problem or a class of problems. Algorithms consist of a finite number of well-defined single steps. (Wikipedia, 2017b)

Algorithms are well suited for an implementation through software. However, an algorithm does not have to be digital. Process instructions are classic operating instructions in algorithmic form.

In our context, an algorithm solves a mathematical problem and describes an approach that the computer can interpret correctly and that calculates, in a finite amount of time, the correct solution for every possible input defined by the mathematical problem. (Zweig, 2016, 2017)

Algorithms solve different classes of problems, for example the group of optimization problems: based on a set of inputs, they enumerate a number of candidate solutions and define a cost or profit function for every possible solution. The solution with the lowest cost (or the highest profit) is then selected.24
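
As a sketch, with an invented route-planning example: the algorithm enumerates all candidate solutions, evaluates the cost function for each, and picks the cheapest:

```python
# Brute-force optimization: enumerate candidates, score each with a cost
# function, select the minimum. Stops and distances are invented values.
from itertools import permutations

stops = ["A", "B", "C", "D"]
distance = {frozenset(p): d for p, d in [
    (("A", "B"), 3), (("A", "C"), 7), (("A", "D"), 2),
    (("B", "C"), 4), (("B", "D"), 6), (("C", "D"), 5),
]}

def cost(route):
    """Total distance of visiting the stops in the given order."""
    return sum(distance[frozenset((route[i], route[i + 1]))]
               for i in range(len(route) - 1))

best = min(permutations(stops), key=cost)   # the lowest-cost solution wins
print(best, cost(best))
```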

The deterministic nature of algorithms and the predictability of their results suggest that the future can be entirely controlled. Under the premise of the complete computability of the world, the future is reduced to the possibilities laid out in the now. The calculation IS the future, so the thesis goes. Bruno Bachimont sees in software

"a device that regulates an unfolding in time – the calculation or the execution of the program – on the basis of a structure specified in space, the algorithm or program. The algorithm specifies that, once the initial conditions are met, the result cannot fail to be obtained, at a given complexity. The program is thus a means of certifying the future, of eliminating its uncertainty and improbability so as to bring it under control." (Bachimont, 2008: 10)

This determinacy of software invites problems: at the level of results, within a digital environment there is no room for unexpected outcomes. It is the task of engineers to exclude exactly these. Moving in a programmed environment, one perceives the world like in the 1980s movie "Tron", where movement is only possible along pre-existing trajectories and behavior only within an exactly defined framework.

Repeatedly, in public discussions, algorithms are attributed an active operational role, especially when decisions are traced back to an algorithmically calculated result. The traditional opinion is that algorithms are nothing but instructions that could just as well be executed by their developers, only not at the speed and scale that computers can handle. In this sense, algorithms can already be seen as a sort of enhancement of the human being, because they allow someone to have their own operating instruction executed a million times over. And all this without physical presence, free of fatigue, and without the possibility of human error.

Katharina Zweig calls algorithms "frozen operating instructions based on the ideas of some individuals, executed million- or even billion-fold and independent of time and space" (Zweig, 2016).25

She expresses very emphatically that the responsibility lies with the humans who develop algorithms and those who implement and further use these algorithms as building blocks in their programs.

Algorithms are readily brought into play when objective and fair decisions are demanded, especially when decisions affect many people, for example in the areas of justice, security, and politics – anywhere we expect equal treatment. In the process, algorithms are attributed characteristics like neutrality, objectivity, and infallibility.

In the literature, such decision-making is often termed ADM ("Algorithmic Decision Making") and consists of the following components:

1. developing processes for data collection,
2. recording the data,
3. developing algorithms for data analysis,
4. interpreting the results on the basis of human-designed interpretation models, and
5. acting automatically, with the action derived from this interpretation by means of a human-designed decision-making model. (Algorithm Watch, 2017)
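
A deliberately simplified sketch of such an ADM pipeline (the credit-scoring rules and thresholds are invented for illustration) makes visible how a human design decision sits in every one of the five components:

```python
# Each function below corresponds to one ADM component; every weight,
# threshold, and rule is a human design decision, which is exactly the point.

def collect():                       # 1. + 2. data collection and recording
    return [{"income": 42_000, "late_payments": 1},
            {"income": 18_000, "late_payments": 4}]

def analyze(record):                 # 3. the analysis algorithm
    return record["income"] / 10_000 - 2 * record["late_payments"]

def interpret(score):                # 4. a human-designed interpretation model
    return "low risk" if score > 1.0 else "high risk"

def act(label):                      # 5. automatic action via a decision model
    return "approve credit" if label == "low risk" else "reject credit"

for person in collect():
    print(act(interpret(analyze(person))))
```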

In reality, algorithms can of course be fundamentally erroneous: it is entirely possible that an algorithm does not solve a problem but calculates solutions that do not correspond to the specification of the mathematical problem, that it never finishes the calculation, or that it produces wrong results for some of the inputs.

Likewise, the implementation can be flawed, the algorithm can be given incorrect input, or simply the wrong algorithm can be selected for a given task.

Also, an algorithm always has to match the chosen modeling of the problem. Questions and tasks that are to be solved algorithmically first have to be modeled mathematically.

This modeling of a problem is done by humans, who contribute their own world view, experience, and their own blind spots. As with design and implementation, the modeling of tasks and problems involves not just one possible solution but always several solution variants. The decisions made in the process have significant effects on the results, and these decisions cannot always be guided by the desire to choose the most suitable model. (O'Neil, 2016) The computational resources, running time, and available data for modeling are only a few among many parameters that are weighed, especially in commercial modeling. Often a particular modeling is chosen because it fits the situation and complete, tested, and trusted algorithms are available for it.26

Mathematical models for specific real-world problems are not always easy to develop, and it is often easier to use an existing model that was developed for a completely different purpose because it serves the desired purpose to a satisfactory degree – algorithms included: spam filters, in modified form, identify HI viruses; epidemiological models prove useful for predicting the commercial success of movies in theaters. (O'Neil, 2016: 32) This is per se neither dangerous nor undesirable. What should be part of the thought process, however, is that with every transferred model there is the possibility of adopting something that has undesired side effects in the target application.

Whether and in which instances the calculated value is incorrect due to the modeling is often only revealed later. As long as it can be assumed that no mathematical errors occurred, the result carries the feigned objectivity of the computer.

"To create a model, then, we make choices about what's important enough to include, simplifying the world into a toy version that can be easily understood and from which we can infer important facts and actions. We expect it to handle only one job and accept that it will occasionally act like a clueless machine, one with enormous blind spots," writes Cathy O'Neil. (O'Neil, 2016: 17)

Her description of modeling shows that with complex issues, many subjective decisions have to be made before a problem can even be solved arithmetically. (O’Neil, 2016)

The user, who has insight into neither the modeling nor the algorithm nor its implementation, but sees only a value or just a red or green light on which to base his decision, has to trust that no significant errors occurred during the entire development process.

Considering all the aspects mentioned – a chain of human, individual decisions about design, implementation, and modeling with its proneness to error and bias27, and the assumption that even algorithmic decisions are weighings that often contain at least heuristic elements – the hope for objective and "just" decisions through algorithms already has to be viewed critically after this first chapter.

According to some definitions, ADM is already part of the most-discussed software discipline, so-called artificial intelligence.

Artificial Intelligence

"It is everywhere. Artificial intelligence will permeate everything, at the store, in the car, at the doctor’s office"

Sepp Hochreiter, Head of the Institute for Bioinformatics, University of Linz

Indeed, there is hardly an electronic device on sale anymore that is not advertised as being controlled by artificial intelligence. There is no general definition of the term, and the acronym AI is widespread. Usually, it refers to a branch of informatics focused on enabling software to solve problems independently. This approach is also termed weak AI.

If the goal is to create a human-like consciousness, the term strong AI has become standard. The birth of modern AI research is often connected with a summer seminar at Dartmouth in 1956, where Marvin Minsky, John McCarthy, Nathaniel Rochester, and Claude Shannon set out to fundamentally clarify within two months how human thinking could be simulated with a computer.

In contrast to strong AI, weak AI aims to master concrete application problems of human thinking; here, human thinking is to be supported in individual areas. Ultimately, weak AI is about the simulation of intelligent behavior by means of mathematics and informatics. (Wikipedia, 2017a) An integral part of AI systems has to be the ability to learn, which cannot be added retroactively. The system also has to be able to deal with uncertainty and probabilistic information. (Russel & Norvig, 2012)

In particular, the further development of so-called neural networks and, building on them, methods of machine learning ("deep learning"), which follow neuroscientific models of the human brain and human learning, brought remarkable progress in disciplines like pattern and speech recognition.28

The most recognized algorithms in the field of artificial intelligence research are "learning" algorithms. The algorithms themselves are not modified in the process; they build a decision structure, which, however, is shaped by the data available for processing. The algorithm "learns" with the help of this data and is "trained", as this process is called in informatics jargon. Only in a second step is new data then classified and categorized. Image recognition algorithms work this way: first they learn what a car looks like, and then they categorize objects presented to them into "cars" and "not cars". In doing so, mistakes happen: when actual cars are not recognized, these are called "false negative" decisions; when objects that are not cars are classified as cars, these are called "false positive" decisions.

The focus of such algorithms is controllable, depending on whether it is more important to recognize all cars as such or to avoid false recognitions. The decision between sensitive (all cars) and specific (no non-cars) is mostly a trade-off, since one often comes at the detriment of the other.29
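
A small sketch with invented confidence scores illustrates this trade-off: moving the decision threshold reduces one error type at the expense of the other:

```python
# Assume a trained model outputs a "car confidence" per object; the threshold
# that turns the score into a decision is where sensitivity meets specificity.
actual_cars     = [0.9, 0.8, 0.55, 0.4]   # true cars, with model confidence
actual_non_cars = [0.6, 0.3, 0.1]         # non-cars that also received a score

def evaluate(threshold):
    false_negatives = sum(s <  threshold for s in actual_cars)      # missed cars
    false_positives = sum(s >= threshold for s in actual_non_cars)  # phantom cars
    return false_negatives, false_positives

# Lowering the threshold catches all cars but admits non-cars, and vice versa:
for t in (0.35, 0.5, 0.7):
    fn, fp = evaluate(t)
    print(f"threshold {t}: {fn} false negatives, {fp} false positives")
```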

In various respects, artificial intelligence is relevant to the topic. First, because in its weak form it is the underlying technology of more and more decision-making, recommendation, and consultation applications.

Second, because it changes human self-perception. In light of the narrative of machines superior to human beings and of an approaching super-intelligence – recalling the Promethean shame – we feel incapable of making decisions without the support of software and thus relinquish our autonomy.

And third, AI builds the technological basis for the simulation of human communication and interaction, which entices us to trust software as if we were dealing with human beings or at least living things. One of the first experiments with still very simple technology, Joseph Weizenbaum's simulation of a psychotherapy session30, revealed the tendency to trust machines without scrutiny.

Software-simulated interaction is not required to pass the Turing test to be recognized as human – human-like elements suffice.

Software as an autonomous actor

"Algorithms are conductors orchestrating interface happenings. They make things happen and affect change within machine processes and human behaviors." Estee Beck (Beck, 2016: 8)

The question whether software can be attributed agentivity is controversial.

If conscious, intentional behavior is assumed31, then the question is almost identical with the development of strong AI. Under this premise, a majority of authors would currently reject agentivity. However, what if the definition is less strict? Weak AI is developed with the goal of solving problems and of using all available resources to do so. Accordingly, there is a clearly defined goal, even if the solution strategy is not always clear at the beginning. Is this software an autonomous actor? Christopher Noessel pragmatically sees a new class of software that independently performs tasks per user instruction while the user is doing something else.32

Estee Beck approaches the topic from the angle of "rhetorical studies", asking whether, and what kind of, influence through software can occur at all. This persuasiveness is a precondition for seeing software as an agent:

"Whatever views a person or organization holds about algorithms, make no mistake: Algorithms are conductors orchestrating interface happenings. They make things happen and affect change within machine processes and human behaviors." (Beck, 2016: 8)

With linguistics and speech act theory in mind, she perceives software code as language objects and as quasi-rhetorical agents with persuasive abilities.

She underlines that computer algorithms are persuasive due to their performative nature and the cultural values and convictions that are embedded, or rather coded, into their linguistic structures. She calls them persuasive because of their ability to affect thoughts and actions. Like Lanier, she believes that software represents quasi-objective structures: the generation of an algorithmic structure is based on the knowledge and experience of its creator(s), and ideological bias always permeates the structure.

She goes further, however, comparing software with language and attributing to software a considerably stronger performativity. In the same vein, Hayles writes:

“When language is said to be performative, the kinds of actions it “performs” happen in the minds of humans, (…) these changes in the mind can and do result in behavioral effects, but the performative force of language is nonetheless tied to the external changes through complex chains of mediation. By contrast, code running in a digital computer causes changes in machine behavior and, through networked ports and other interfaces, may initiate other changes, all implemented in the transmission of code.”(Hayles, 2005: 49-50)

Simply because of this functionality and performativity – precisely the causing of changes in humans and machines – an interpretation should always include both the machine and the human level. Just like language, software cannot exist without context. Code cannot exist without a storage medium, runtime environment, compiler, and hardware. Software and language display parallels; computer code, however, has properties that exceed the spoken word and text. At the same time, ambiguities of the kind typical for human language are foreign to software code – for the machine, everything has to be reduced to 0 or 1 in the end.

It seems to me that attributing an active agency role to software code would go too far and amount to an anthropomorphic view. However, I find it plausible to see it as an extension of the "agency" role (Introna, 2011: 117).

In the example of algorithm development, the intentions and designs of mathematicians and programmers are carried forward. With the execution of the code, these incorporated "agencies" are then interwoven into the new context. Introna speaks of "encoded agency" (Introna, 2011). This agentivity could be far-reaching and have a much stronger impact on us than has been theorized so far:

„We may become to think of algorithms as quasi-agents carrying forward the agency of human symbolic action. But, the changes algorithms produce and affect as a force go deeper than agency and cut at persuasive design.” (Beck, 2016: 7)

At this point, the substance of this work allows three more points to be summarized as software-inherent aspects with respect to influencing our decisions:

1. Software as Persuasive Element

Software can be seen as a systematic way of processing and organizing information in order to reach a certain goal. This logic plays a part in determining how people and machines experience the world around them. Because algorithms determine logical procedures for action, they embody persuasive functionalities. Sequences determine how and what data a machine or person collects and processes so that it fits the logic of the instruction. As Kevin Brock suggests in his work, algorithms control the thinking and actions of human and machine like syllogisms. (Brock, 2013)

In general, software is designed to control processes and effect change – human behavior included. Software rarely creates something directly; it processes and distributes instructions. Software is pure manipulation – without this being meant as a normative evaluation.

2. Inclusion/Exclusion

Software, and algorithms in particular, have a basic design of inclusion and exclusion built in: only the data necessary for the operation is accepted in the first place, and then only in a certain form.
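
A minimal sketch (field names and categories invented): the software only admits data in its predefined form, and everything else is excluded by design:

```python
# Inclusion/exclusion built into the input layer: only expected fields in an
# expected form pass through; nuanced free-text answers are dropped silently.
ALLOWED_STATUS = {"single", "in a relationship", "married"}  # multiple choice

def accept_profile(raw):
    """Keep only data that matches the predefined fields and categories."""
    profile = {}
    if raw.get("relationship_status") in ALLOWED_STATUS:
        profile["relationship_status"] = raw["relationship_status"]
    return profile

print(accept_profile({"relationship_status": "it is complicated, really"}))  # -> {}
print(accept_profile({"relationship_status": "married"}))                    # accepted
```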

3. Ideology/Bias

As suggested above, software can be perceived as a quasi-ideological structure.

Another interpretation, from the field of behavioral economics, speaks of "software bias", alluding to the same phenomenon: the incorporation of the perspectives and convictions of the programmer or modeler into the software and its implementation, and their continued effect. In this context, the assumption often is that this is an implicit, non-intentional process. (Ziewitz, 2016)

Human Being and Software

"Software is a compelling urgency, the very stuff out of which man builds his world" Joseph Weizenbaum (Weizenbaum, 1976: 20)

In this chapter, I intend to demonstrate the degree to which software, with all its possibilities but also with its inherent and rarely questioned characteristics, has captured our living spaces. First, a description of fields and applications will show the extent of human-machine interaction and of the possible interference that is at the core of this work. Subsequently, I will describe the most common methods.

Kitchin and Dodge distinguish four levels of complexity in the use of software:

They define "coded objects" as things that either make use of software to function or need software in order to be read. A good example is a CD.

"Coded networks" are networks connecting coded objects, or networks that are controlled or surveilled by software – from telecommunications, data, gas, and lighting networks to the local network of a car.

The authors describe the third level as originally analog processes that have been digitized, like for example withdrawing money. In this case, they speak of "coded processes".

When coded infrastructure and coded processes together make up an entire system, they call it "coded assemblages". An example is air traffic as an interplay of the processes and networks of ticket sales, security controls, check-in, luggage handling, air traffic control, aircraft control, etc. (Kitchin & Dodge, 2011: 6-7)

When looking at all four levels of software application, it soon becomes obvious that we are highly exposed to software and its effects. However, only in very specific cases is this exposure a subject of discussion.

Ubiquity

“If the computational system is invisible as well as extensive, it becomes hard to know what is controlling what, what is connected to what, where information is flowing, [and] how it is being used.” (Weiser, 1999: 694)33

The researchers at the famous Xerox Palo Alto Research Center, PARC for short, one of the precursor institutions of today's Silicon Valley, said in the 1990s that digitalization would only be truly advanced when we no longer notice the computers surrounding us, when we use them unconsciously while pursuing our everyday activities. For this, Mark Weiser coined the term "ubiquitous computing" (Weiser, 1999). At least a part of the population in Europe, the US, Asia, and the world's largest cities seems to have reached this point in 2017.34

Indeed, there are hardly any parts of our everyday lives not touched by software – whether we interact with software directly, are supported in our activities by software, or use products and services that are controlled by software or whose production was controlled by software.

Our working world is hardly imaginable anymore without the use of one or more computers. We write, calculate, read, and communicate with the help of software. Industrial robots and automated production processes are widely in use. Developments in AI, efficiency, dropping costs, and scalability lead to a scenario in which software no longer just supports work (as with calculation, communication, archiving, etc.) or controls work processes (disposition, job placement, platforms like Uber or Lieferando), but increasingly takes on independent tasks from humans in the sense of an agent. Software is already discussed as a so-called "cobot", a colleague and member of the team. (Frick, 2015) Personnel selection is an example. Meanwhile, in Germany too, web platforms are used to search for and contact candidates (examples are Jobspotting and talent.io). The entire bandwidth of criticism of algorithmic decision-making comes into focus in the discussion around actual personnel selection via software:

What kind of applicant data is collected for analysis, and what method is used? Are answers given voluntarily, or is data of unknown origin also used?35

How exactly do the selection algorithms work? What mathematical model, based on what underlying assumptions about personality and behavior, is behind them? Is it a validated personality test, or do the selection mechanisms remain secret? Was something developed explicitly for the purpose, or were modules adopted from other areas? What data was used to "train" the software?

There are several examples in the literature that point to "software bias". In personnel selection via software, ethnicity, gender, and place of residence can play a role, especially if the software was calibrated using real data without human correction. (Carr, 2015; Christl & Spiekermann, 2016; Rid, 2016; Rosenberg, 2013)

Consequently, at least one of the driving arguments in favor of using recruitment software – besides efficiency and scalability – namely the elimination of a recruiter's bias, does not seem to be fully satisfied. Though studies about biased human decision-making in selecting and evaluating other human beings support the use of ADM in the hope of more objective decisions36 (Meier, 2017), in my opinion this hope has no satisfactory basis in light of what has been laid out here. One might exaggerate and claim that the bias of an individual is replaced by the biases and interests of the many, whose identity and exact impact on the decision become blurred because they are barely traceable anymore along the process chain that generated the decision-making tool and its implementation. My intent is not to argue against the use of ADM but in favor of realistic expectations. I merely plead for an awareness of the associated issues.

The financial sector has always been close to mathematics. By this I mean not only sophisticated risk models for complex financial products, but also normal payment transactions among banks and private individuals using cards or payment service providers like PayPal, which are of course software-controlled. A discredited sector is high-frequency trading, where supercomputers act independently or with human intervention, within seconds down to the microsecond level, according to previously programmed algorithms. These react to market changes and make trading decisions accordingly.37 Automated credit approval has also become a target of criticism.38 A frequently cited and well-documented example involves an African-American man who was denied a car loan because the credit rating was based on the color of his skin. The same intransparent chain of biases as mentioned above in the example of personnel selection applies here. (Greenfield, 2017; Schlieter, 2015)

Software controls medical devices in diagnosis and therapy, manages patient records, optimizes hospital stays, controls rescue missions, and processes insurance benefits. Private insurers have started to collect and analyze their members' vital data in order to better recognize their risk of disease and to adjust premiums accordingly. The relatively low cost of software development and its easy applicability to a large market allow a burgeoning of medical software solutions that are associated with health in a much broader sense but operate outside the regulated public health sector. An example is "Precire", a software that analyzes a person's character based on speech samples. The results are given as a percentage match with the five main personality dimensions in psychology. (Breit & Redl, 2017)

Nowadays, many people take part in a plethora of algorithmic analyses on their own and of their own free will, as users of fitness trackers, smartwatches, and other so-called "wearables" – devices equipped with tiny sensors and a computing unit that can be worn close to the body or on clothing. The personal motivation can range from the occasional use of a GPS device while hiking, or wearing a smartwatch as a status symbol, to complete self-measurement. The followers of this trend, called "quantified self", use apps to observe and measure their sleep, movements, weight, each gram of nutrition they ingest – and even the CO2 content of the air they breathe. The motives range from the optimization of a physical or other isolated activity (whereby the optimum remains an asymptotic term, so "permanent improvement" would be more fitting) to a comprehensive improvement of one's sleep and one's life. In the latter case, this is termed lifehacking.39

It is a permanent tinkering with oneself. Someone who tinkers with machinery and software will do the same with himself if he perceives himself as a machine that can be improved. Measuring one's own body and comparing it with all other people who have access to the internet makes it very likely that one comes to see oneself as a suboptimal stimulus-response system. (Pasquale, 2015)

This practice, which seems bizarre at first sight, nurtures solid arguments for the discussion of paternalism through software. The first argument refers to the interaction with the sensors' evaluation software itself: it is advertised as a user-friendly, colorful app that gives us feedback on "our" data using simple graphs, images, and sounds. Everything said so far applies here as well – including to those applications that appear like a blend of slot machine and toy, use algorithms of unknown origin, and whose feedback we mostly take very seriously and adapt our behavior to. The colorful images on our cell phones often become the basis for our decisions, which is considerably less trite than it may sound: we do not go to the movie theater because we want to complete a training unit to fulfill our daily program. Maybe we do not take the car but ride the bike because we are participating in a "challenge". Due to networking, a singular decision turns into a mass phenomenon with unpredictable repercussions.

The second argument focuses on the sociopolitical aspect: the voluntary nature of self-tracking could easily turn into a duty or societal expectation, as Steffen Mau warns in his book "Das metrische Wir". (Mau, 2017)

The key proposition Mau develops in his book is that the "quantifying assignments of status rankings" (Mau, 2017) change the order of inequality. Things that used to be non-comparable, like health or attractiveness, become comparable and are put into a hierarchical relation. Numbers suggest a minimum of objectivity.

However, it is not just about an apparent objectification of societal comparisons; at the same time, "the competitive mode of socialization is being strengthened" (Mau, 2017). Merely by our assuming that everybody else attributes relevance to the status data, the data becomes more important to us. Consequently, these scores40 do not reflect the social order; they create a new one, which then develops normative and political pressure.41 In the Western world, scoring power lies with private actors with secret algorithmic authorities42 – the social media platforms. In its present interpretation, "social media", like the term platform, is unthinkable without the existence of software.

Today, the concept of the platform is central to the use of software. The idea is not new and existed before the invention of the computer; at its core, it is the idea of the exchange. The capacity of software platforms to mediate directly between clients and suppliers extends to many areas: music streaming is an example from the arts; projects around direct democracy, like the Pirate Party or Liquid Democracy, show that the slogan "kill the middleman" is applicable anywhere.

The elimination of intermediaries – be they wholesalers, distributors, or political parties – is often perceived as liberating, and in many ways it opens up new perspectives.43 It is the strength of platforms to decrease transaction costs and to create transparency. For Jeremy Rifkin, those platforms represent a great opportunity: he sees them as tools of empowerment for civil society and sees great potential arising from shared property and the elimination of transaction costs. (Rifkin, 2014)

Like any technology, software holds potential and dangers, and it is not my goal to write a technology impact assessment for software. Instead, I will focus on the manipulation of free will and of the conditions of decision-making and evaluate whether a significant restriction of autonomy is evident or possible.

Thus, when evaluating social media, it is important to stress that they are software-mediated communication platforms. By now, every part of the interaction is controlled by software: from input to processing to advertising.

Points of criticism here are, first, the design of the interface; second, the limitation of communication through the determinations and structure of the medium; and third, the processing and delivery of information through algorithms that are not openly accessible.

I will touch upon the first point later; the second has already been discussed in the chapter "Software": the limitation of characters on Twitter, the standardization of emojis, the all-too-simple like button, the prefabricated layouts of Facebook and Instagram, the structure of the "timeline" and of the newsfeeds – all this facilitates use on the one hand but simultaneously standardizes the communication taking place through it.44 The delicate aspects are the normative and moral determinations that are made and the cultural norms and values that are transferred.45

The categorical restriction of nudity on US-based software platforms makes it impossible for European users to pursue what is, from their perspective, a more relaxed form of social interaction: nudity is simply removed. In my opinion, this example demonstrates that not only users shape the so-called "net culture", but that normative values are also structure-dependent. Renren.com, the largest Chinese social network, has very different rules and algorithms than Facebook.

The third point again pertains to ADM, which in this context makes seemingly trivial decisions, namely which posts are shown to which user or group of users and with which priority. Although originally developed to ease reading and browsing – building on the psychological premise that we prefer to interact with people of similar convictions and similar life circumstances and tend to reject the foreign – the ranking algorithms have meanwhile created so-called "opinion bubbles". (Greenfield, 2017)
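
A toy sketch of such a ranking (the interest profile and posts are invented): ordering the feed by similarity to past interactions automatically pushes dissenting content down:

```python
# Feed ranking by affinity: posts matching the user's recorded interests rise,
# everything else sinks - the structural seed of an "opinion bubble".
user_interests = {"football": 0.9, "cooking": 0.6, "politics_left": 0.8}

posts = [
    {"id": 1, "topics": ["football", "cooking"]},
    {"id": 2, "topics": ["politics_right"]},
    {"id": 3, "topics": ["politics_left"]},
]

def score(post):
    """Sum of the user's affinity for each of the post's topics."""
    return sum(user_interests.get(t, 0.0) for t in post["topics"])

feed = sorted(posts, key=score, reverse=True)
print([p["id"] for p in feed])  # -> [1, 3, 2]: the unfamiliar view ranks last
```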

Social networks have an interest in constructing their users as persons who appear as coherent as possible, based on as much data as can be extracted from them. Likewise, as humans we have an interest in living coherent life scripts. In modern society, this often does not come easily, since the roles we have to assume in the different contexts and circumstances of our existence are so diverse. (Montag, 2016) However, we have learned to appreciate that we can accentuate different sides of ourselves if we wish to do so, and that cross-context social control over the coherence of our life script hardly exists anymore.

It is essential for the value of the data used in targeted advertising, called „targeting“, that the digital profiles can be connected with real persons.46 The value lies in surveying a person to the maximum degree; several online identities therefore interfere with the digital reconstruction of a person.

Consequently, Facebook encourages its users to see their public image as an inseparable part of their identity, among other things by using the timeline feature. All users receive a uniform self, expressed as a coherent narrative starting at birth. This fits the narrow conception of the self and its possibilities held by Facebook founder Mark Zuckerberg: „you have one identity. The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end quickly“. He also argues that „having two identities for yourself is an example of a lack of integrity.” (Kirkpatrick, 2011)

Coherent identities also lead to generally more predictable decisions. Our deliberations are usually not arithmetic in nature and therefore not calculable. The human being is not a Turing machine; human problems are often non-deterministic and hence not solvable by algorithms, according to Julian Nida-Rümelin. (Nida-Rümelin, 2011) Precisely because this is so, we allow other elements to enter our considerations, like the wish to lead a coherent life. Just as we can assess our friends because we know what is important to them and which boundaries they would not cross even to their own advantage, tracking mechanisms try to assess us.

A single interaction of a consumer, for example visiting a website, can trigger a plethora of data flows and a number of hidden events across many different parties. Profile data distributed over several services is dynamically linked and combined in order to make many automatic decisions over people's heads every day, both trivial and consequential.47

In addition to platforms and ADM, a new component comes into play here that would be unthinkable without software: the statistical analysis of large data quantities. The buzzword big data covers many distinct steps of data collection, processing, analysis, and application. Big data adds a probabilistic aspect, because big data analyses are not about causal explanations but about statistical correlations. In 2008, Chris Anderson conjured up the end of traditional causal science – soon there would be enough data, and the tools for statistical processing, that the world's relationships could be interpreted as correlations rather than causes. „Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all.”48 (Anderson, 2008)

Big data is not so much about precise numbers as about probabilities and correlations. Correlations are not universally causal, their interpretation is often difficult, and the possibilities for error are numerous. Predictions should therefore be taken the same way as a weather forecast, yet in the course of data processing they are often treated as binary and exact.
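A small sketch illustrates why such predictions deserve weather-forecast skepticism: two data series can correlate almost perfectly without any causal link, simply because both follow a shared trend. The data here is invented:

```python
# Sketch: two series can correlate strongly without any causal link.
# A "prediction" derived from such a correlation is a probability
# statement, not an exact fact (illustrative data only).
import random

random.seed(42)
trend = [0.1 * t for t in range(100)]                  # shared upward trend
ice_cream = [x + random.gauss(0, 0.5) for x in trend]  # e.g. ice cream sales
drownings = [x + random.gauss(0, 0.5) for x in trend]  # e.g. drowning deaths

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Close to 1.0, although neither series causes the other:
# both simply follow the same seasonal trend.
print(round(pearson(ice_cream, drownings), 2))
```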

Correlations as input parameters for algorithms are common, and together with all the collected data traces that create a digital identity without our conscious doing, they form the basis for predictions of individual and group behavior.

Nobody knows the exact accuracy – this remains a company secret. It can be assumed that the advertised accuracy is not achieved in reality. A self-test on applymagicsauce.com was alarming: the analysis was so far off that the concern was not that the calculation might be imprecise, but that it was fundamentally flawed.49

At least since WikiLeaks, applications for the surveillance of persons, objects, or entire states have been part of the public discourse; however, their extent and exact mechanisms remain unclear.

In general, government security measures are increasingly supported by a combination of big-data and ADM-systems, be it with predictive policing or when deciding if someone has to undergo especially strict screening at the airport.

I will also deliberately leave out the military applications of software – from reconnaissance over the direct use of cyber weapons of any kind to the development of autonomous weapons systems (so-called LARs or LAWS) – as well as the unsolved ethical issues in that context.50 Likewise, the vulnerability of physical infrastructure due to its dependence on control software, as portrayed in best-sellers like „Blackout“, and the various scenarios for digital warfare shall not be discussed here. Although I see a strong influence on the human being, and under disastrous conditions autonomy and freedom of action may well be restricted, these situations are not the subject of this work.

The use of software in schools and institutions of education is far-reaching and, in my opinion, a legitimate topic of discussion.

There are very few cases in which science does not depend on software, and many research results are themselves products of software. Brain research offers an almost humorous example that fits the next chapter's topic of error rates in the use of software. In 2016, a wave of disconcertion went through the scientific community: when visualizing brain activity with the fMRI method, the standard software packages in use had not been validated thoroughly enough, and the results of studies were consequently distorted. The results showed brain activity where there was none – the long chain from sensory input to visual output is so complex and opaque for many neuroscientists using the system that a plausibility check was impossible. (Charisius, 2016)

We often forget that software visualizations are not an analog image like that of a telescope but the product of a long chain of software decisions.

Interfaces

“In the electronic age we wear all mankind on our skin. We wear our brains outside our skulls and our nerves outside of our skin.” Marshall McLuhan (McLuhan, 1994)

In retrospect, many statements by Marshall McLuhan read as if he had precisely predicted the further development of electronic media. He writes in „Understanding Media“ in 1964:

“By putting our physical bodies inside our extended nervous systems, by means of electric media, we set up a dynamic by which all previous technologies that are mere extensions of hands and feet and teeth and bodily heat-controls — all such extensions of our bodies, including cities — will be translated into information systems. (…) But there is this difference, that previous technologies were partial and fragmentary, and the electric is total and inclusive. An external consensus or conscience is now as necessary as private consciousness. With the new media, however, it is also possible to store and translate everything; and, as for speed, that is no problem. No further acceleration is possible this side of the light barrier.” (McLuhan, 1994: 57-58)

Here he already anticipates important aspects of the current criticism of digitalization: ubiquity, the shift in the relation between privacy and the public sphere, the externalization of cognitive processes, the near-merging of human being and software, networking, and the extension of the human being through technology.51 McLuhan still perceives the latter in neutral terms, as extension rather than improvement of the conditio humana in the sense of human enhancement.

If we want to examine the interaction between human being and software more closely with regard to influence, we have to look at the existing interfaces and try to understand their form and function on the one hand and tackle the philosophical aspects of this interaction on the other.

Should there be a relation between the two aspects, it must also be tested for its relevance.

Hence, how do we interact with software? A direct wiring of human nerve cells to electronic components is possible and is used in the medical and artistic domains. Ultimately, in these cases too, the electrical voltage pulses cannot be distinguished from those of a touch display. The connection, however, occurs „within us“, so we experience the controlled component more as our own organ than as a computer keyboard. These special cases, while fueling the debate about cyborgs and the ethical questions it raises, will not be considered here; the focus is on the most widely used interfaces: smartphones (multi-sensory through vibration, display, and speaker), classical computers, and voice assistants like Alexa by Amazon. The principles employed here are also applied to the many displays in household appliances, machines, and cars.52

While early computer interfaces were either command lines or metaphors, current interfaces have developed their own aesthetics and visual language that relies far less on metaphor.

Consequently, usability has increased massively and cultural preconditions have been reduced to a minimum.53

A separate discipline in psychology is dedicated to researching human-machine interaction; it proposes that the physical condition of the interface itself already seems to influence how we experience digital content, remember it, and interact with it.

This is not surprising when we consider our life-world experience: obviously, we experience our environment differently depending on which senses and which parts of the body are involved in exploring it. Looking at a tree is categorically different from touching it, and nobody would call touching a tree with one's own hand and touching it with a stick equivalent experiences. Why, then, should it be the same to click an object on a screen with a mouse or to touch it with our own finger? Of course, the object itself does not change – but that is also true of the tree. What is new for us is that we can execute the exact same manipulation of an object in the digital world. This is determined by the possibilities the software can represent, not by the physical condition of the interface, even if we sometimes perceive it that way. It is easier for us to move the tree with a finger on the screen than with a mouse; the virtual movement itself, however, is executed by the software, and for the software it is irrelevant whether the impulse comes from a touchscreen or a mouse.

For most people, physical feedback is important, even when it is a simulation: vibrating gaming controllers and smartphone keypads are popular. An example described in the literature involves the control of commercial aircraft, where Airbus and Boeing went in different directions.54

Studies support the assumption that there is a difference between mouse and touchscreen. Touching an object on a screen is a direct visual metaphor for touching the content itself, similar to touching an object in the real world, unlike the indirect touch via mouse or trackpad. When we imagine touching an object, image processing in the brain is activated, which in turn enhances the mental simulation of the behavior connected to the object. (Schlosser, 2003) Essentially, a simulated or imagined touch produces effects very similar to an actual touch.

Moreover, the interactivity of objects enhances the animation of mental product images (Schlosser, 2006), and the liveliness of images enhances the perception of ownership. (Brasel & Gips, 2015; Elder & Krishna, 2012) Touching content directly on a screen is thus a close analog to interacting with objects in the real world.

Studies with mice in a virtual environment suggest, however, that the range of activated brain areas is incomplete compared with movement in the real world; the interaction takes place in a sort of „in-between world“. (Aghajan et al., 2014)

Touchscreen devices are also associated considerably more directly with the user's „extended self“ (Hein, 2011) and are seen far more as part of one's own personality than a laptop or traditional computer. Even if one does not want to go quite so far as to consider smartphones and tablets extensions of the human body, the relationship with these devices seems stronger than with TV sets or desktop computers.

Building on these empirical indications, Brasel and Gips speculate that we tend to trust information perceived on smartphones more, either because the source of information is closer or because it resembles a partner with whom we share a stronger bond. Furthermore, their experiments suggest that while using touchscreens we value emotional and „tangible“ attributes more than abstract attributes like prices or rational attributes like tests and reviews. An even stronger effect appears when the touchscreen device is held directly in the hand. (Brasel & Gips, 2014)

This effect of blurring physical boundaries has interesting aspects when it comes to the formation of our decisions:

The closer and more directly we interact with software, the more our priorities shift towards emotional parameters. Therefore, I find it plausible that this impacts the evaluation and prioritization of our reasons.

Smartphones, tablets, „wearables“, smartwatches, and Google Glass are not the counter-movement to techno-centrism they are often portrayed as. The claim that they are more human than complicated interfaces and the large screens of desktop computers does not, in my opinion, hold, and rather misses the core of digitalization. These devices are what makes it possible to open up so many of our spheres of life to software solutions in the first place. With free apps and permanent connectivity, they deliver step-by-step navigation instructions and algorithmic recommendations for the next lunch. As body sensors, they store our location, mood, and physical condition in the cloud in order to return instructions. They are interfaces in the true sense of the word because they intrude deeply into our lives and are the portals into a digital world. Hence, their designers aim to make them as appealing as possible: it is not about selling them as products – these devices are primarily advertising media. At a conference in Atlanta, a presenter referred to them as „decoys“ (Logg, 2017); for the critics of digitalization, they are primarily instruments of manipulation.55 (Carr, 2017) The notion that smartphones are a „repository of the self“ (Wegner & Ward, 2013) seems an exaggeration to me. Yet their influence on and significance for our lives is impossible to ignore, as the mere presence of a smartphone suffices to distract us.56

Modern interface design works with the findings of behavioral psychology and deliberately incorporates the human peculiarities (I deliberately do not call them weaknesses, since they prove very valuable in other contexts) in perceiving and processing information from our surroundings, as well as in the way we make decisions. The primary objective is to facilitate use and to let information be processed with a minimum of cognitive burden. So-called information graphics or cockpit diagrams allow us to grasp facts and circumstances at a glance and to make lightning-fast decisions. However, design is also used to capture our attention, to encourage decisions we probably would not have made on careful reflection, or to „nudge“57 us to lead a better life. At that point we are dealing with paternalism.

The most familiar design principles are „ease“, „habit forming“ and „digital nudging“.

Ease and convenience

"Those who embrace ease may not be able to move past it" Dilger Bradley (Dilger, 2000)

Bradley Dilger describes the concept of „ease“ as an ideology, an idea of self-sufficiency that becomes a goal in itself. Since consumers increasingly rely on practices and objects that are easy to use, convenience is becoming more and more important.58 „Ease of use“, often called „convenience“, also plays a role in the competition for the digital human's attention – only what is somewhat easier to use than the competitor's product gets used. „Ease“ has always been a selling point:

“Ease is never free: its gain is matched by a loss in choice, security, privacy, health, or a combination thereof. This is well represented in deployment of a large quantity of Internet software”. (Dilger, 2000: 4)

In theory, users can focus more on the actual task again instead of on the use of the device. The reason is a mental process called „cognitive offloading“: the outsourcing of thought processes or memory performance in order to concentrate on other things. On the assumption that we have a limited attention span and hence a limited capacity for conscious thought, we are to be unburdened. We are all thankful for anniversary and birthday reminders, so we no longer have to recall them ourselves. The price for no longer processing some things in our own consciousness, however, seems to be that this knowledge leaves no trace in our memory and thus is not accessible for future operations. (Heersmink, 2016) Controlled experiments give some indication that we remember information less well when we believe it will remain permanently accessible than when we assume we must memorize it or lose it. (B. Sparrow, Liu, & Wegner, 2011)

The outsourcing of information is not a critical process per se, provided the outsourced information remains accessible. Furthermore, when using outsourced information, we do not seem to distinguish between memorized and retrieved information: many participants in experiments were convinced that they „knew“ the information themselves and thought they had performed better cognitively than they actually did. When we are unable to distinguish between information we have worked out ourselves and information we simply retrieved via software, the question arises of how vulnerable we are to influence and how capable of our own judgement. Our own knowledge is always stored in a context and linked with our experiences, and there is much evidence suggesting that thinking requires our own knowledge. Of course we investigate, question, or glean; however, „the art of remembering is the art of thinking“ (James, 1899, Chapter 12: Memory), as the American psychologist and philosopher William James stated as early as 1892.

We can only reconsider and think anew on the basis of what we already have at our disposal in our thoughts. Raw data is a memory without a history. Van Nimwegen's thesis „The paradox of the guided user“ offers some indications: two groups were given the same tasks, one with a very user-friendly, „thinking“ software interface offering maximum cognitive offloading, the other with a rudimentary, complicated user interface. At first sight, the results seem counterintuitive and run against the trend toward interfaces that are as simple as possible and as much cognitive offloading as possible: the group with the complicated interface actually tackled the tasks better, was less distracted, and could apply the newly acquired abilities more readily to new assignments. (van Nimwegen, 2008)

The assumption seems plausible that the quality of our power of judgement and our intellectual capacity benefit from use and do not improve by lying idle.

When we hand over to software our ability to think rationally and to remember, we sacrifice our ability to process information into knowledge, and with it a part of the epistemic basis for our autonomous decisions.

Digital Nudging

„It is not the consumer’s business to know what he wants.“ Steve Jobs

A fundamental idea of paternalism holds that the human being is unable to make optimal decisions – be it due to cognitive biases that are part of the conditio humana, due to biography, environment, or genetic predisposition – and that happiness therefore often requires some help.

Nudging59, seemingly the gentlest approach, chooses neither the path of persuasion nor direct control through regulations or prohibitions. Instead, nudging explicitly uses these psychological shortcomings to bring about a certain behavior.

Libertarian paternalists, as they call themselves, recognize our spectrum of cognitive pitfalls and recommend influencing human behavior not through rational argumentation alone, which is too often ineffective, but through the use of our own biases, in order to bring about more advantageous decisions. (Thaler & Sunnstein, 2009) They argue that they exclude no options and thus leave people the freedom to make bad choices; consequently, they respect human autonomy. At the same time, nudges introduced at the moment of choice make people more likely to decide in favor of the advantageous result. (Sunnstein, 2015) The staging of the environment and of the moment of decision, with all the options and their presentation, thus gains significance and becomes the mission of so-called „choice architects“. Sunnstein clearly does not go as far as Sarah Conly, who advocates a „coercive paternalism“ in the sense of mandatory measures. (Conly, 2013)

She criticizes nudging as relatively useless and the worst possible compromise between compulsion and freedom:

“I argue that insofar as libertarian paternalism is manipulative, it fails to capture the intuition that we should respect people’s capacity to make rational choices; at the same time, it fails to give us the results that we want, because people can still have the options to pursue bad courses of action – they can still smoke, or run up intractable debt, or fail to save any money. It gives us, in a sense, the worst of both worlds.” (Conly, 2013: 8)

Even before the term nudging was coined, many of these influence mechanisms were in use in marketing and sales. Indeed, Sunstein and Thaler did not invent the methods of nudging but borrowed them mainly from the private sector and proposed using them in the public context as well.

The psychological foundations for the work of „choice architects“ (Thaler & Sunnstein, 2009) lie in the concept of „bounded rationality“ (Simon, 1959). Heuristics and biases60 influence our decisions (Kahnemann, 2011), and rules of thumb likewise play a big role in them. (Gigerenzer, 2007)

These mechanisms have certainly proved useful, simplifying decisions for us in many situations and allowing us to concentrate on the most important facts and to decide faster. (Evans, 2006) Sometimes, however, they are the source of systematic errors and biases.61 (Kahnemann, 2011)

Thaler himself describes six mechanisms:

The first, „incentive“, is simple – it is about rewarding the user for a decision he or she makes. „Understanding/mapping“ involves representing information in a way that the user can easily place within his or her own world of experience. In this context, info-graphics can serve as nudges when consciously deployed: a price can be perceived as low or high depending on what it is compared to.62

„Defaults“, the third mechanism, was already mentioned in the example of organ donation. Red or green lights, smileys, and similar symbols shown in the course of user interaction as an expression of expected or unexpected behavior are examples of „giving feedback“. The fifth mechanism – tolerance, „expecting error“ in the interaction, and avoiding it through process design – helps keep the user in the loop and avoid negative experiences. At first glance, „structuring complex choices“ seems a plausible pedagogical formula that many could subscribe to unchecked; the question that arises in its application is which nuances of choice are sacrificed in the name of simplification and whether the simplification favors certain options.

This behavioristic concept with libertarian hues fits well into the deterministic and causalistic milieu of software development, in which the human being and his behavior present algorithmically predictable coordinates for human-machine interaction. Once human behavior conceptually becomes machine behavior, the interaction is significantly easier to model.

Weinmann et al. define digital nudging as „the use of user interface design elements to guide people’s choices or influence users’ inputs in online decision environments“ (Weinmann, Schneider, & vom Brocke, 2015).

Pre-activated default options for tipping in restaurant apps (Square) have increased tips (Carr, 2015); opt-out mechanisms for newsletters are bothersome and at the same time very effective. Booking sites of budget airlines have meanwhile become so notorious that they had to be partially regulated for reasons of consumer protection: not only are they full of default options for additional services, they also intentionally increase the cognitive load to make the real pricing schedule less transparent.63
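A minimal simulation shows why defaults are so effective. The share of users who passively accept a pre-selected option, and all the percentages, are illustrative assumptions, not empirical values:

```python
# Sketch of a "defaults" nudge as in the tipping example: the pre-selected
# option is what many users end up with, because changing it costs effort.
import random

random.seed(1)

def simulate_tips(n_users: int, default_pct: int, stickiness: float = 0.7):
    """Average tip percentage; 'stickiness' = assumed share of users
    who keep whatever is pre-selected (an illustrative figure)."""
    total = 0.0
    for _ in range(n_users):
        if random.random() < stickiness:
            total += default_pct                  # passive user keeps default
        else:
            total += random.choice([0, 10, 15])   # active user overrides
    return total / n_users

# Raising the default from 10% to 20% raises the average tip, although
# no option was removed – the "libertarian" part of libertarian paternalism.
print(round(simulate_tips(10_000, 10), 2))
print(round(simulate_tips(10_000, 20), 2))
```

No choice is taken away; merely changing the pre-selected value shifts the aggregate outcome.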

Nudges work even better on mobile devices: due to the more intuitive use via finger or voice, the emotional distance is smaller than with a traditional PC, as I have discussed earlier.

Experience suggests that digital and analog nudges are perceived differently and, moreover, work to different extents: people tend to react differently to digital nudges than to offline nudges.64 While the reasons for such behavioral differences are not yet fully understood, potential explanations point to the mistrust of web users, stoked by the excessive nudging attempts on some websites. Behavioral psychologists and interface designers potentially underestimate the human capacities for learning and judgement severely.

Moreover, the foundations of nudging are far from as solid as often presented. Dirk Helbing warns that the impression of a clear psychological mechanism is being created, while one should bear in mind that 60 percent of scientific results in psychology are not reproducible. (Helbing, 2015)

He therefore advocates more scientific evidence, transparency, ethical evaluation, and democratic control of Big Nudging: the measures would have to bring statistically significant improvements, the side effects would have to be acceptable, the users would have to be informed (much as by a medical package leaflet), and those concerned would have to have the last word. (Helbing, 2015)

Habit Forming

Alexa! Print my thesis...

Additionally, contemporary software interfaces focus on so-called „habit forming“, the lasting shaping of user behavior. (Eyal, 2014) The motivation may lie with users themselves, who seek to modify their behavior with the help of software – for example to lose weight, live healthier, drink more water, or quit smoking.65

In many instances, the objective is to capture the user's attention for as long as possible, especially with social media, media streaming, and gaming. The voice assistants Alexa, Cortana, and Siri intend to go even further: for their designers, the declared goal is not just to make them indispensable parts of life but also to shape decisions – especially buying decisions – and turn them into habits. „Alexa, order cat food!“ is the almost ridiculous-sounding gateway version; „Alexa, what's new?“ is a request that raises a more complex set of questions about how news is selected and analyzed.

To awaken the user's interest in a product or service in the first place, it is essential to create so-called „hooks“66: „experiences designed to connect the user’s problem with the company’s product with enough frequency to form a habit“ (Eyal, 2014), as Nir Eyal explains. Designers all over the world follow his four-step model:

First, there needs to be a „trigger“ that prompts us into action.67

The next step is the „action phase“ – the simplest possible action in anticipation of resolving the existing tension. The simpler and easier, the better, and the less „cognitive interference“. This is also the secret of the success of voice assistants68: the only thing easier than asking „Alexa“ would be merely thinking the next step.69

Following the user's action, there has to be the reward: the internal trigger – the unpleasant feeling that led to the action in the first place – must be resolved, even if it is just Alexa calmly confirming that the light has been switched off. For this to become more than a regular „feedback loop“, something special should be presented that is not expected, or at least not in this form – a variable reward, much like in a „Skinner box“ (Skinner, 2014).70

In the last phase, called „investment“, the user is asked to give something – usually exactly what the designer really wants: to enter data, to like something, or to upload a picture. While the brain is still bathing in dopamine, a price is to be paid.71 (Austin, 2017)
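Eyal's four phases can be caricatured in a few lines of code. The probabilities and state variables below are illustrative assumptions of mine, not measurements from any real application:

```python
# Sketch of the four-phase hook as a loop: trigger -> action ->
# variable reward -> investment. The variable reward schedule is
# the Skinner-box element.
import random

def hook_cycle(user_state: dict) -> None:
    # 1. Trigger: a notification exploits an internal tension (e.g. boredom).
    if user_state["boredom"] <= 0.5:
        return
    # 2. Action: the simplest possible response - one tap.
    user_state["opens"] += 1
    # 3. Variable reward: sometimes nothing, sometimes social approval.
    if random.random() < 0.3:                # unpredictable, hence habit-forming
        user_state["dopamine"] += 1
    # 4. Investment: the user leaves data behind, raising the value
    #    of the next trigger for the designer.
    user_state["data_points"] += 1
    user_state["boredom"] = random.random()  # tension rebuilds over time

user = {"boredom": 0.9, "opens": 0, "dopamine": 0, "data_points": 0}
for _ in range(50):
    hook_cycle(user)
print(user)
```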

The described methods are central to „persuasive computing“, a development that proceeds from programming computers to the programming of human beings. (Helbing et al., 2015)

The techniques listed so far share many similarities. The common basic intention is to appeal to human affects and emotions and to shift decisions to the pre-deliberative level. In the software designer's ideal scenario, the person does not weigh options at all but follows the impulses that the software and its interface set. Instead of weighing reasons, deciding, and acting accordingly, as described by Julian Nida-Rümelin (Nida-Rümelin, 2005, 2011), we then act in the automatic mode of the model by Daniel Kahnemann and Amos Tversky. (Kahnemann, 2011)

These models are not contradictory, and we use both in everyday life. In my opinion, it is very unlikely that decisions made under the influence of „nudging“ can be considered intuitive decision-making as described by Kahnemann and Gigerenzer, that is, grounded in our experiences and internal heuristics. If the interface designer did a good job, we receive a „reward“ each time we have acted as the designer intended and thus reinforce our own heuristic that following the recommendation means deciding correctly – while dissociating the true effect of the decision from our own act of deciding.

The techniques of „affective computing“ go even a step further. It is the term for the discipline focusing on the development of systems that are supposed to recognize, interpret, process, and simulate human emotions. „Affective design“ addresses the interfaces between humans and such affective computing systems. These interfaces are supposed to correctly measure human emotions and, conversely, to trigger the desired human emotions.

Aside from the critique that emotions are here treated as an objective, measurable quantity rather than the subjective experience of a person, the field is making great progress. Depending on availability, physiological measurements are recorded, such as blood pressure, heart rate, skin conductance, and the like.72 The recognition of emotions from speech patterns is well advanced and has meanwhile become standard in many call centers.73

Likewise, emotions can be deduced from keyboard typing patterns in an increasingly reliable fashion.74
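As an illustration of what such systems work with, here is a sketch of simple keystroke-timing features. Dwell and flight times are the typical raw material in this literature, but the function names and example values are mine, and no trained classifier is included:

```python
# Sketch of keystroke-dynamics feature extraction (illustrative only,
# not a validated emotion model).

def typing_features(key_events):
    """key_events: list of (key, press_time, release_time) in seconds."""
    dwell = [release - press for _, press, release in key_events]  # key held down
    flight = [key_events[i + 1][1] - key_events[i][2]              # gap between keys
              for i in range(len(key_events) - 1)]
    return {
        "mean_dwell": sum(dwell) / len(dwell),
        "mean_flight": sum(flight) / len(flight) if flight else 0.0,
    }

events = [("h", 0.00, 0.08), ("i", 0.20, 0.27), ("!", 0.45, 0.55)]
# A real system would feed such features into a trained classifier;
# faster, more erratic typing is often read as agitation or stress.
print(typing_features(events))
```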

Automatic emotion recognition from faces builds on the „Facial Action Coding System“ known from many detective series.

This software is used at airports, in the service sector, but also for personnel selection. However, its main application currently is in the gaming industry.

Gamification

"The thought process that went into building these applications, Facebook being the first of them, ... was all about: 'How do we consume as much of your time and conscious attention as possible?”

Sean Parker, Ex-President of Facebook (Parker, 2017b)

The collaboration between behavioral psychology and software development has flourished especially well in the gaming and betting areas and is viewed rather critically due to the active exploitation of a certain addiction potential. (Christl & Spiekermann, 2016: 20)

Experience gained in the gaming sector, and techniques developed there with great effort, are easy to transfer to other areas of application. This use of elements from the gaming world in non-gaming contexts, with the intent to influence user behavior, is called „gamification“. (Deterding, Rilla, Nacke, & Dixon, 2011; Whitson, 2013)

The objective is to increase user motivation in order to intensify interaction with the applications and elicit desirable behavior. Gamification helps make content and processes more appealing and binds users to an application for longer periods by insinuating a clear path to mastering the app and by strengthening the subjective impression of independence and freedom of choice. Gamified applications exploit the human inclination to play and thereby get users to accomplish activities usually considered boring, such as tax declarations, expense reports, filling out surveys, shopping, and much more. (Deterding et al., 2011; Wikipedia, 2017a)

While gamification was long limited to marketing and personnel development, design elements from gaming have meanwhile been transferred to many areas. As with design thinking and other innovation methods, „foreign“ or „new“ elements are intentionally introduced to generate ideas and to involve and motivate participants more strongly at the emotional level. Hardly any software application in the consumer sector can do without gamification elements.

The potential for influencing human behavior through playful elements is very high because they strongly rely on surprise, curiosity, and competition – all of which are emotionally highly charged factors.

Again, in order to better understand which mechanisms come into play, I would like to mention some examples75 (a short sketch follows the list):

1. Feedback

These mechanisms reward users for their performance, primarily in the form of points (or collectible equivalents), levels (as a sign of growing mastery of the desired behavior), badges (an easily visible, social aspect of gamification: rewarding users for certain behavior and simulating a gain in status), bonuses (extra rewards for completing a series of actions, mimicking bonuses earned at work), and notifications (informing users about their status changes, including earned points, badges, and bonuses).

2. Indicator Mechanisms

These mechanisms define a user's relative position in time or in relation to other users – for example countdowns, which convey a feeling of urgency in order to increase activity or to prompt an action the user did not originally intend. Progress indicators help users understand where they stand in the course of a process and what still awaits them.

3. Rankings

Lists of top performers in certain areas show users their ranking relative to those closest to them or to other, randomly chosen groups.
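The sketch announced above combines the three mechanism families – feedback, indicators, and rankings – in a toy class. Thresholds, badge names, and point values are invented for illustration:

```python
# Minimal sketch of gamification plumbing (illustrative assumptions only).

class GamifiedApp:
    def __init__(self):
        self.scores = {}   # user -> points
        self.badges = {}   # user -> set of badges

    def reward(self, user: str, points: int) -> None:
        # Feedback: points accumulate, a badge marks a simulated status jump.
        self.scores[user] = self.scores.get(user, 0) + points
        if self.scores[user] >= 100:
            self.badges.setdefault(user, set()).add("Power User")

    def progress(self, user: str, goal: int = 100) -> str:
        # Indicator: shows the user where they stand in the process.
        pct = min(100, 100 * self.scores.get(user, 0) // goal)
        return f"{pct}% towards your next badge"

    def leaderboard(self, top: int = 3):
        # Ranking: social comparison as a motivator.
        return sorted(self.scores.items(), key=lambda kv: kv[1], reverse=True)[:top]

app = GamifiedApp()
for user, pts in [("anna", 60), ("ben", 120), ("anna", 50)]:
    app.reward(user, pts)
print(app.progress("anna"), app.leaderboard())
```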

All these mechanisms are proven in real life and tested in social practice.

A main point of criticism of gamification refers to the data collection and analytical practice. Jennifer Whitson recognizes here „a new driving logic in the technological expansion and public acceptance of surveillance" (Whitson, 2013).

My unease concerns a suspected emotionalization of decisions.

While it can be a pedagogically meaningful measure to use playful elements to foster motivation and learning, I find their abundant use to emotionalize the everyday processes of our lives gravely exaggerated and worthy of discussion.

One of the reasons why software design increasingly focuses on emotions lies in the fact that human decisions are fundamentally open and that we often make decisions that are unexpected to others, especially when we weigh reasons. There is a higher probability of inducing a certain behavior with simple stimulus-response chains than of convincing someone. This is also shown by research on how humans deal with algorithmic decision-making: the question of whether humans trust algorithms has less to do with the quality of the algorithm or its results than with the context. (Logg, 2017; Yeomans, Shah, Mullainathan, & Kleinberg, 2017)

There is no clear indication of whether, or under which circumstances, people trust algorithms. On the one hand, studies point out that we tend to be more honest facing a software simulation of a doctor; on the other hand, patients prefer a diagnosis made in person over a computer-generated one. (Promberger & Baron, 2006) Still other experiments suggest that people regard recommendations by decision-making systems with skepticism, which can in part have fatal consequences.76 (Bazerman, 1985; Dawes, 1979) In matters of personal taste, we seem to trust our friends more than a software recommendation (Yeomans et al., 2017); where logical problems are concerned, however, people do seem to rely more on software. They also use search engines for knowledge they actually already have at their disposal.

We generally tend to rate our own assessments higher than those of others, and moreover to merely incorporate others' advice into our own assessment. (Logg, 2017) Experiments on human-software interaction are sometimes oversimplified, leading to basic misconceptions, such as confusing self-assessment with the assessment of other humans.77 When the self is involved, people seem to react more dismissively: we think highly of our own decisions, as studies on „overconfidence“ show. The phenomenon is called „over-precision“.

The reverse phenomenon occurs when we prefer to ask Google Maps rather than a stranger who would witness our lack of knowledge.78 (Logg, 2017)

This demonstrates that many factors flow into our assessments, which in turn serve as the basis for our attitudes and decisions.

Discussion

Awareness and free will

“The real problem is not whether machines think but whether men do.”

B.F. Skinner

A precondition for discussing whether software endangers our free will and our autonomy is recognizing that such a will exists. To be autonomous, one must be able to make one's own decisions in the first place; this includes the ability to draw conclusions and to turn them into action.

As mentioned in the introduction, my analysis is based on a naturalistic underdetermination. (Nida-Rümelin, 2005)

The human being can be subject to the laws of nature and nevertheless play his role as initiator and author of his actions.

The question of free will, which preoccupies philosophers, psychologists, and neuroscientists alike, remains unanswered even after more than two thousand years of documented effort. Especially with the rise of computer science and neurophysiology, the debate has gained new intensity in recent decades.

The natural sciences operate on a predominantly causalistic foundation. The conception of events that are neither determined by prior factors or events nor purely random is not compatible with the predominant understanding of science. Even though this scientific paradigm no longer contains the strongly causal element, it does not become any more compatible with the idea of a human will that is not clearly associated with measurable activity in the brain – especially since the questions about the existence, nature, substance, function, and location of consciousness are themselves controversial. The debate is tied to the discussion about the nature of software and to the „theory of mind“79, the theory of the psyche of others, as understood for example by Daniel Dennett. He argues that consciousness will in the future be entirely explained by neuroscience and cognitive science (Dennett, 2017); every process of consciousness may then be associated with a neurological process.

Informatics, for its part, draws some of its assumptions from the „computational theory of mind“, which conversely builds on the work of computer scientists, cyberneticists, and mathematicians. The widespread belief that our thinking is (or must be) structured like the ideal operating system of an ideal computer strongly influences how we think about ourselves and about the machines that surround us.

Owing to the spectacular success of developing computers and software analogously to the suspected functioning of our brain, thinking and research are moving ever further in that direction. The computational theory of mind is interpreted more broadly than as a mere metaphor of the brain as a computer: it describes mental states themselves as „computational“, that is, computed. The brain is held to use mechanisms similar to a computer's – whereby „computer“ is to be understood not as a physical machine but as the concept of the Turing machine. (Dennett, 2013)

This point of view is very common among computer scientists and software programmers80. From it, combined with functionalism, emerges the idea that sufficiently highly developed neural networks essentially constitute artificial intelligence in the sense of human intelligence.81

The predominant analogy drawn between humans and computers progressively blurs the boundaries: we attribute agency and identity to machines and software while, by the same token, considering ourselves biological computers functioning much like machines. Authors like Marvin Minsky, Ray Kurzweil, or Nick Bostrom go as far as being convinced that the human mind could also exist independently of the body, as software code on a computer platform82, or that software could develop a human-like or emergent consciousness. (Bostrom, 2003; Kurzweil, 2012)83

For neuroscientists like Gerhard Roth (Roth, 2003) and Wolf Singer (Nida-Rümelin & Singer, 2011) but also for philosophers like Thomas Metzinger, consciousness represents only a useful construct of the brain. (Metzinger, 2004)

The physicist Sir Roger Penrose criticizes the anticipation of a software-based artificial intelligence. He, too, suspects the physical home of consciousness to be in the brain and proposes a model for the interplay of brain and consciousness based on quantum-mechanical effects. (Penrose, 2009) In a presentation to Google engineers, John Searle – known for his skepticism towards the semantic abilities of artificial intelligence, as expressed in his famous „Chinese room“ argument – put forward a possible quantum-mechanical explanation for free will. (Searle, 2007)

Determinism, he argues, is not universal, and it would be a misconception that something functioning a particular way at one scale automatically transfers to another – in physics this is nothing out of the ordinary. Quantum mechanics is the only form of indeterminism we know of in science; it removes chance from epistemology and brings it into ontology – even the existence of the universe would be contingent. For Searle, too, free will is only possible with consciousness. (Searle, 2007)

For Thomas Metzinger, the vast majority of our thinking is a sub-personal process and not characterized by attentional or cognitive agentivity.

Originally, on this view, the ego is a neuro-computational instrument designed to take hold of and control the body – first the physical, then the virtual one. It not only generates an internal user interface allowing the organism to better control and adapt its behavior; it is also a necessary requirement for social interaction and cultural evolution. (Metzinger, 2014: 136)

Most events in the physical world are mere events for it; an extremely small proportion are additionally actions, that is, events caused by an explicit goal representation in the conscious mind of a rational agent. (Metzinger, 2014: 134)

When Wolf Singer states in an interview that „a person did what they did, because there were no other options in that moment, otherwise they would have acted differently“ (Nida-Rümelin & Singer, 2006), he means explicitly that the preceding state of the brain had a causal effect on the action.

This idea of complete determinism is not compatible with our everyday practice and goes against our experiences and intuitions. To imagine that this thesis had to be written exactly as it happens in the moment, without me having any actual influence on it, and that the decisions I make in the process are mere constructs and ex-post justifications of my brain, seems absurd to me.

In any case, however, we experience ourselves as acting subjects in a comprehensible, real environment.

Despite all divergence, there is agreement that freedom of the will is not just a topic in philosophy, in the brain, or in our mind – it is also a social institution. The presumption that there is such a thing as free will and free action, and the fact that we treat each other as autonomous agents, are reflected in the foundations of our legal system and the rules that govern our societies – rules based on the assumption of responsibility, imputability, and liability.

Julian Nida-Rümelin takes a clear position against determinism. For him, it remains, like naturalistic probabilism, an academic position, because neither can be integrated into our everyday practice (Nida-Rümelin, 2005: 41) and the convictions associated with it.

„Our interpersonal relations (…) require that people are responsible for their actions, that they are not just objects of causal influence alone – neither those of physics, of biology, or neurophysiology, nor those of psychology.“ (Nida-Rümelin, 2005: 27)

Should we one day be forced to tell an entirely different story about what the human will is or is not, it could impact our societies in unprecedented ways. If, for example, imputability and liability do not really exist, then it is irrational to punish people for something they ultimately could not refrain from. Retaliation and satisfaction would then permanently appear as terms from the stone age, as something we have inherited from the animals.

The only humane response remaining would be rehabilitation measures. (Metzinger, 2014)

Decisions

"Subjective certainty when it comes to the own future decisions is conceptually rejected. The outcome of the consideration must remain open, so a decision can be made in the first place. This statement is true in logical terms."

Julian Nida-Rümelin (Nida-Rümelin, 2005: 51)

Hence, we must assume that by principle we are free and that we accept this freedom as a precondition for our decisions and actions.

Julian Nida-Rümelin considers three things fundamental for decisions:

1. A decision marks the conclusion of a consideration
2. Before the decision is made, there is no determined outcome, thus it is „free“
3. A decision, once made, translates into action. (Nida-Rümelin, 2005: 45)

Of the two questions raised here – about the role of causality and that of available knowledge – the former has already been discussed. I continue to assume a naturalistic underdetermination of our decisions.

Before we decide, we weigh the reasons. We ask ourselves which options for action are available, reflect on their consequences, and consider which option aligns best with our convictions. We probably also interrogate our inner motives and desires, but the reasons for a decision amount to more than desires or inclinations. (Nida-Rümelin, 2005: 46)

Obviously, desires and inclinations play a role and influence decision-making. Reasons are practical when they speak for or against an action, and theoretical when they speak for or against a conviction. (Nida-Rümelin, 2005: 46)

Practical reasons govern the desires, hopes, intentions, etc. (conative attitudes), and theoretical reasons the convictions (epistemic attitudes), of a rational84 person.

The weighing of reasons is a sort of inner argumentation.

„Judgement and action of a rational person is guided by practical as well as theoretical reasons. The person tries to make the entirety of theoretical and practical reasons coherent and, being confronted with options for action and assumptions, to absorb this as aptly as possible. He/she decides against a course of action if the consideration of practical reasons was negative, just like they reject an assumption if the consideration of theoretical reasons was negative.“ (Nida-Rümelin, 2005: 48)

This also clearly reflects that reasons are not reasons in a causal sense. „Reasons are not causes, there is a fundamental difference“, as Searle agrees. (Searle, 2007)

We allow ourselves to be affected by reasons, as Julian Nida-Rümelin puts it; we do not search for explanations.

Likewise, our desires and inclinations are not reasons. Motivating intentions are already the result, and simultaneously the mental representatives, of accepted reasons. (Nida-Rümelin, 2005: 55)

We implement our decisions via actions. One could also say that decisions are intentions85: our actions are guided by intent. When we do something with intent, we have reflected upon it, and it is not mere behavior in the automatic mode of System 1 in the model by Daniel Kahnemann and Amos Tversky.

In order to call an action rational, there must be reasons for it. In this respect, decisions are intentions that precede actions and conclude a consideration – a deliberation process.

We control our actions in conjunction with intentionality: behavior is part of the process and, where indicated, is modified or stopped. This process, physical and mental at the same time, is hard to imagine, and yet it is not conceivable any other way – acting without this control would not be perceived as acting (and consequently we would not feel responsible for it).86 (Nida-Rümelin, 2005: 58)

A frequent counter-argument to this model is that it does not correspond to the reality of life: such deliberation is too complex, too cumbersome, too slow, and hence too remote from our practical experience. The model, it is argued, reflects the normative ideal of the rational human rather than what we actually do day to day, which is much „dirtier“ – owing to the brain's evolutionary pressure to save energy, we deliberate consciously only in exceptional cases and otherwise follow known patterns without weighing reasons.

Even this objection is not a fundamental contradiction: either the described process of deliberation happens very quickly because the variables of the decision are so clear – at least to ourselves – that the reasons are obvious, or we are dealing not with acting but with mere behavior.

The latter possibility limits the picture of our autonomy only by degree, as it does not question our fundamental ability to consciously weigh reasons. For our interactions in society, however, we must assume even in those cases where we „automatically“ follow established patterns that a process of deliberation took place, and hence full responsibility applies.

This continues to apply in borderline cases. Who is not familiar with situations of cognitive overload in which, due to overstimulation, fatigue, and the like, there was not enough time for reflection and we reacted „from the gut“?

Often we realize that a so-called conscious decision was unconsciously motivated – which is not a problem of free will in the philosophical sense, but certainly ethically relevant.

I conclude from what has been said so far that software, or any other interface, does not impact or endanger our free will as long as it does not directly prevent our ability to be affected by reasons.

We don’t consider our reasons in a vacuum but on the basis of our knowledge and experiences, under certain psychological, physical, and social conditions and always embedded in a societal context.

These aspects flow into our decision-making process, and in all three of them I see potential for influence by software.

Software and the Epistemic Basis for Decisions

„The influence of knowledge on our mental state and hence the influence of the consideration of reasons on our motivating intentions, consequently knowledge-based decisions, draw a basic line of predictability of our actions.“ (Nida-Rümelin, 2005: 65)

We can never state with certainty how a decision will play out, either because genuinely random processes are involved or due to epistemic imperfection. (Nida-Rümelin, 2005: 58)

If in the modern world we effectively insert a new layer of sensory organs and filters between ourselves and our environment, what then is the basis of knowledge and awareness for our decisions?

One of my main arguments in the software discussion is the broad influence on the epistemic basis for decisions.

If we assume that we can never know with a hundred percent certainty what implications our decisions will have, and if we do not want to base the moral evaluation of our decisions on the results of our actions, then we deliberate on the basis of the existing epistemic basis. Exactly here lies the intersection with many software applications that promise us access to all the knowledge in the world as well as lightning-fast and objective decision support, claiming to make our decisions better informed.

When we have to – or want to – decide on a restaurant for dinner (I include „want“ because many of us take pleasure in decision processes, finding them anything but laborious and cumbersome, and they also give us a sense of autonomy and power), we love to turn to apps that present us with choices. This selection is, of course, not an accurate reflection of the possibilities: the selection already limits the options, and each applied filter – whether set manually or built in – adds another restriction. The key difference is whether I apply this restriction manually in the interface or whether the software restricts the possibilities based on the analysis of my behavior (more precisely: the analysis of stored data generated by my behavior).
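The restaurant example can be made concrete in a few lines. Every step below shrinks the set of options before any weighing of reasons begins; the data and the „inferred preferences“ are invented for illustration:

```python
# Sketch: every filter - whether set by me or inferred from my stored
# behavior - narrows the epistemic basis before deliberation starts.

restaurants = [
    {"name": "Trattoria", "cuisine": "italian",  "price": 2, "rating": 4.1},
    {"name": "Sushi Bar", "cuisine": "japanese", "price": 3, "rating": 4.6},
    {"name": "Imbiss",    "cuisine": "german",   "price": 1, "rating": 3.8},
    {"name": "Bistro",    "cuisine": "french",   "price": 3, "rating": 4.4},
]

# Restriction 1: the app's coverage - only what is listed at all.
options = [r for r in restaurants if r["rating"] is not None]

# Restriction 2: a filter I set myself, visible in the interface.
options = [r for r in options if r["price"] <= 2]

# Restriction 3: invisible - inferred from my past behavior
# ("users like you chose...").
inferred_preferences = {"italian"}
options = [r for r in options if r["cuisine"] in inferred_preferences]

# What I experience as "my" choice happens inside this remainder.
print([r["name"] for r in options])   # ['Trattoria']
```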

In most circumstances, we will evaluate the decision by the quality of the food, whether the price was appropriate, and whether we spent a nice evening with friends. None of this is causally connected with the recommendation; nevertheless, we will include it in the next selection process as feedback information.

Meanwhile, many of our daily decisions are made online: shopping, booking a trip, ordering books, reading newspapers and news, communicating. All these processes are software-controlled and harbor the key problems I have presented above. Moreover, some of these processes are designed to present us with the basic information and knowledge about the products and services online – and with some, this already applies to the very selection we choose from.

For most applications, selection via software interferes with our autonomy and freedom of decision-making only discreetly.

What matters are the breadth and scope of software support: the problem has a dimension of principle and a dimension of scale.

Long-term, the modification of our base of experiences probably changes our attitudes and does not leave our reasons unaffected.

Each piece of information received via a software interface is imbued with values, ideas, intentions, and premade decisions, as I have demonstrated. The experiences we make are always mediated; decisions are made for us in advance, and each piece of information is already filtered. There is no immediacy of experience and of life.

This can also be seen in a positive light – software can protect us from oversupply or undesired experiences, much like a filtering sensory organ in the flood of information and communication. This is precisely the argument social media platforms make for presorting content for each user individually. We all value communication software that reliably distinguishes important messages from spam.

As long as I am aware of what the software is doing and can set its parameters, I definitely see a place for software as a protective shield in a digital world. Operating within the infosphere (Floridi, 2015) requires, in my opinion, a software agent on one's side, especially to recognize and evaluate the many attempts to influence us.
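
As a sketch of such a protective agent – again with hypothetical names, and assuming a simple keyword heuristic rather than any real product's method – the decisive point is that the user, not the operator, sets the parameters:

    # A minimal sketch (hypothetical names) of a filter agent whose
    # parameters the user sets herself.
    def filter_messages(messages, blocked_words, threshold):
        kept, shielded = [], []
        for msg in messages:
            hits = sum(word in msg.lower() for word in blocked_words)
            (shielded if hits >= threshold else kept).append(msg)
        return kept, shielded

    # The user decides what counts as noise and how strict the shield is.
    kept, shielded = filter_messages(
        ["Win a FREE prize now!", "Dinner at 8?"],
        blocked_words={"free", "prize", "win"},
        threshold=2,
    )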

Software and the Immediate Circumstances of the Decision

„what is chosen often depends upon how the choice is presented.” Johnson et al. (Johnson, Shu, Dellaert, & Fox, 2012: 488)

Going beyond our knowledge and experience base, software can also have an effect on the immediate circumstances under which we make decisions.

As I have demonstrated above, we can assume that the way we are confronted with a decision can have an impact on the decision itself. The actual size of this impact is a matter of debate, and it seems exaggerated to claim that the design of the decision-making situation is the crucial factor, overriding all other reasons. Since decisions can be understood as singular events within an ongoing process of deliberation, I find it plausible that the influences present at the moment of decision play a role at the conscious, rational level as well as at a more spontaneous one. I have described the various mechanisms that try to shape decision-making situations so as to raise the probability of a decision of a certain kind. I reserve stronger criticism, however, for the mechanisms described at length in the previous chapters that are designed to prevent us from making a rational and conscious decision.

I call it coaxing instead of convincing.

Julian Nida-Rümelin posits a limit to the predictability of our decisions. In human-machine communication, however, it is exactly this predictability that must remain intact, because only then can the human be included as a calculable parameter.

One of my points of criticism is that, under the influence of behavioral psychology and neuroscience, convincing other people no longer seems to work primarily through arguments but through tendencies and emotions. From election campaigns to gaming apps, there is a tendency to deliberately bypass cognitive control.

In addition, doubting the freedom of the will, together with the postulate of biologically controlled behavior that includes action, diminishes thought per se and calls the value of logical argumentation into question.

In my opinion, these attempts at influence may already constitute an infringement of Kant's principle of non-instrumentalization.87

This amounts to an attack on human responsibility and the undermining of human autonomy.

Thomas Metzinger proposes a definition of autonomy that is close to neuroscience: for him, autonomy has to do with self-immunization, the creation of a protective shield that prevents infection by potential target states in the environment. When our mind wanders, we lose our mental autonomy. Mental autonomy is the ability to control our own inner actions and to act in a self-determined fashion on the mental level.88 (Metzinger, 2014: 136)

An argumentation that defines human autonomy very narrowly has far-reaching implications:

People who cannot decide freely and who do not deliberate rationally should then not be allowed to make far-reaching political decisions either – wherever the wellbeing of others is concerned, such individuals could not take part in the decision or the vote. The same would apply to their own lives, especially where personal and private decisions also affect the general public, for example a healthy lifestyle, the choice of a profession, or leisure activities. This becomes easier still when not only the individual's ability to make rational decisions but the standing of the individual in general is doubted.

Software as a Technical System and Revolutionary Technology

„A technology is interiorized when its use becomes second nature to the majority of the culture in question. Interiorization of a new technology influences the way thoughts are structured.” (Calleja & Schwager, 2004: 8)

The third issue is the embedding of our decisions in a social and macro-societal context that changes constantly and interactively with the new technology.

For many technology critics, this is the pivotal aspect of their concerns: a small number of private companies control the application and further development of these technologies. In doing so, they not only secure global power and influence; quite incidentally, they also spread the worldview behind their software production. The concentration of interests, an (at least postulated) increase in instruments of power, a seemingly unscrupulous use of them, and an (at least perceived) increasing dependency of the individual confront us with new ethical challenges.89

The politically explosive power of software is enormous. While on the one hand processes of direct democracy, participation, and transparency can be integrated into software at an unprecedented scale, we are on the other hand confronted with a so far unimaginable level of propaganda and the spreading of so-called „fake news“. The extensive simulation of the „real“ world as we know it and its merging with a digital „virtual“ environment90, as well as the difficulty of physically locating news and opinions, produce a feeling that the foundations of our everyday epistemology are eroding – an ontological dissonance. Surrounded by this uncertainty, it is hard for us to form informed opinions and attitudes from which our reasons can draw. It is difficult, for example, to place the value of a posting in a general context. At any particular moment, the environment of the social media platform represents our social reality, and we perceive the distribution of the opinions presented there as an adequate representation of the opinions held in the overall population. Simultaneously, the structure of the interface does not allow a differentiated discussion and employs its mechanisms to steer us towards emotional, not rational, statements.91

Criticism of the „Silicon Valley ideology“ is currently receiving the most attention. (Rid, 2016; Schlieter, 2015; Turner, 2006)

For the most part, criticism of Silicon Valley stands in for criticism of software development worldwide and of the effects of digitalization and platform capitalism. Its prominent position and the myths surrounding it make Silicon Valley an ideal projection surface for social and political discourse. Jaron Lanier notes that the idea of complete control over people and society, combined with the claim of pursuing wealth distribution, reminds him of Marxism. Singularity, the noosphere, or the idea that a collective consciousness of web users could be created remind him of Marx's social determinism. (Lanier, 2014)

The charge is that a detached elite plans the future of the world: a very homogeneous „biotope“ of white, well-educated engineers, endowed with almost infinite financial resources, imprints its ideas and norms on the entire world92. (Wajcman, 2017) Likewise, Lanier, Thomas Rid, Nicholas Carr, and Frank Pasquale argue that today it is only a relatively small group of programmers and investors who decide how the majority of people experience the world. (Carr, 2015; Lanier, 2009; Pasquale, 2015; Rid, 2016)

However, software also confronts us with other, ontological challenges:

Suddenly machines make decisions – for us and about us. The interplay of human and machine at the mechanical level is commonplace, as is the fact that machines reduce our workload. We have gotten used to the idea that, in narrowly defined contexts and without our input, machines make the decisions we have preprogrammed. What if they now suggest decisions to us without having been programmed by us?

This unsettles us. The contrast between a clean, perfect artificial intelligence, superior to us humans in seemingly all cognitive respects, and a flawed being trapped in a mortal, aging body intimidates us.

The discourse around artificial intelligence and the so-called singularity is characterized by anti-human rhetoric. Like self-destruction, it disturbs and fascinates at the same time. One objective of this rhetoric is to portray humans as obsolete so that computers appear all the more progressive. In these utopias, humans are portrayed either as hopelessly outdated and left behind or as technologically optimized organisms intertwined with technology – as Übermensch. (Hofstetter, 2015) What could we even hold against the Cloud, that wonderful metaphor for the domicile of our new authority?93 We feel pressure to delegate decisions to software because it can decide with so much more accuracy and objectivity, and simultaneously we feel relief at being able to entrust the growing complexity of our lives to someone.

This criticism goes back to Günther Anders, who did not see technology as a value-free means to an end. His assumption was that the design of devices already determines their application, and that specific economic, social, and political circumstances produce a technology that in turn entails specific economic, social, and political changes – technology thus transitions from object to subject of history. Humans can no longer recognize the structural power of devices, struggle with emotional and cognitive constraints, and come to perceive themselves as deficient. Faced with their own inventions, humans feel a Promethean shame.

For Andrew Feenberg, the ubiquity of software represents a technological system (Feenberg, 2016) that keeps us trapped. In his view, we have lost the ability to see ourselves as detached from it. (Feenberg, 2017b)

When we set foot in an airport – or, these days, already when booking a flight – we find ourselves within a technological system that completely entangles us, telling us what to do and where to go. Whoever does not comply gets into trouble, as everyone has surely experienced.

Even though the ground personnel are human beings, this does not change the fact that we are moving within a technological system, much like Heidegger's „Gestell“. (Feenberg, 2009: 15)

So far, a deterministic view of technology has been the norm: technological progress is the direct result of scientific development; this development takes place outside of society; under the production conditions of capitalism, technology imprints itself on and molds society. For Feenberg, however, technological development is considerably less determined, and society also shapes technology – a co-production of society and technology. (Feenberg, 2017a)

This theory can be illustrated by software development. The interactions between research, application, and use are so strong that they often cannot be separated, and a reciprocal influence at the level of values and structures is clearly at work.

Graham Harman's criticism of digitalization and the Internet of Things draws on Heidegger. (Harman, 2010)

On this view, technology cannot be seen as an extended tool of humans but follows rules entirely of its own. The character of domination emanating from modern technology concerns Harman: it creates its own opinions and imperatives and a corresponding consciousness of triumph – for example, when the creation of factories that in turn create factories is perceived as fascinating. According to Heidegger, all this bears the risk that „the use turns into over-use“ (Heidegger, GA 7: 87) and that the objective of technology becomes nothing but its own purposelessness.

Hence humans become, on the one hand, the rulers of the earth and are, on the other, deprived of power by the distortion of the means-ends balance, reduced to a mere moment of the all-encompassing technological process.94 (Harman, 2005)

Bruno Latour notes that, despite all its power, this all-encompassing technological process, this ubiquitous system of software, is hardly visible. He calls the disappearance of a well-established technology from our focus an “optical illusion”. It conceals how much we have adapted ourselves to this technology:

„if we fail to recognize how much the use of a technique, however simple, has displaced, translated, modified or inflected the initial intention, it is simply because we have changed the end in changing the means, and because, through a slipping of the will, we have begun to wish something quite else from what we at first desired.” (Latour, 2002: 250)

Mark Zuckerberg portrays Facebook as a „utility“, signaling that Facebook should become as commonplace in our lives as the telephone and the water supply network. For many, this is probably already the case. When Vic Gundotra, VP of social networking at Google, postulates „Technology should get out of the way so you can live, learn and love“ (Constine, 2013), this is attractive at first sight for the maker as well as the user. „When Technology gets out of the way, we are liberated from it“ (Bilton, 2012), writes the New York Times columnist Nick Bilton. The view appears almost naive, because the technologies that have so far simply disappeared from our everyday lives are not comparable to information technology. Some structural characteristics are similar, but the degree of digitalization, networking, and interaction far exceeds that of the electricity network.

Luciano Floridi, who sees in rational machines a scientific revolution, describes how comprehensive a paradigm shift can be even when it takes place almost gradually. For him, it is the fourth of its kind, after the Copernican revolution, Darwin's theory of evolution, and Freud's psychoanalysis. (Floridi, 2015)

He introduces the term infosphere95, which in his definition encompasses the digital, the analog, and the offline environments. Increasingly, the analog becomes digital. In the digital realm, the tools (programs) are made of the same stuff as the processed product (data). Information has the characteristics of a public good: unlike a pizza, for example, it is not lost when shared with others. In the infosphere, humans are connected with machines via interfaces – ideally without realizing that these interfaces exist or how many processes are executed in the background. The boundaries between offline and online worlds become fluid until life becomes „onlife“. Technology becomes part of identity. The social self generated in this way reflects back on the identity and self-image of the user. (Floridi, 2007, 2015)

Frank Pasquale points to the emotional aspect of software use and its significance for our norms and our moral approach to everyday life. (Pasquale, 2015)

History96 is full of technology criticism that assessed technologies incorrectly, particularly when those technologies simultaneously brought about a societal paradigm shift that also changed the normative frame of reference.97

All the authors cited stress such a profound change of conditions and of our self-conception in this new context. The massive spread of software confronts us with entirely new situations for which we still lack a well-established normative value system. Software opens up new opportunities and options that we did not have before.

Hence, we also lack the experience needed to make confident decisions. When analog situations are simulated, we often fall back on experiences from the analog world in which we gathered them. This makes us vulnerable to poor decisions, as we cannot yet assess and evaluate the implications of our actions.

This applies to direct causal effects as well as to the consequences of our decisions. We often do not know what the click of a button can cause and what kind of responsibility accrues to us from it. The simulation of an analog, local process often leaves us with the assumption that the effects of our software interaction remain local and limited. I have shown above that this is not the case and that singular, local actions can very quickly turn into global mass phenomena.98

Furthermore, technology changes the way we think about the world and act in it. In doing so, it also changes our normative basis, as Nikil Mukerji argues:

“Just like the laws of physics are reasonably seen as eternal and changeless, the basic principles that underlie our moral duties may be supposed to be unalterable. Though that may in fact be true, the changes that our empirical world undergoes – and that includes technological changes – may nevertheless change the way we think about the issues that lie at the heart of normative ethics.” (Mukerji, 2014: 34)

Our interaction with software changes not only how we think about the world but also how we think we should act.

Moreover, we frequently navigate in an area that is not yet legally defined99 and in which much still needs to be negotiated.

This does not make it easier for us to decide.

Conclusion

„What makes something real is that it is impossible to represent it to completion” (Lanier, 2014)

With the progress of digitalization, we increasingly make decisions under digital conditions100 – more precisely, under digital decision-making conditions.

As we have demonstrated, our decisions still remain free and we remain responsible for them.

However, this freedom is conditional and we do not escape the boundaries of our environment. It is only within these boundaries that we can become operative. (Nida-Rümelin, 2011)

Each technology offers us the opportunity to extend or to narrow these boundaries.

In the case of digitalization, technology enormously increases our options for action and consequently our decision options as well. With each additional option for action, however, our responsibility also increases as a matter of principle.101 It does not shrink; it becomes more extensive. This puts us under pressure. At the same time, we perceive the epistemic basis for these decisions as increasingly less certain and less under our control.

To the same extent that software increases our options for action, it also offers to decrease them for us and to relieve us of the burden of decisions. Again, it is our decision how far we trust it.

Software is a technology that we cannot fully grasp conceptually and experientially yet. It is tool and language at the same time and bears features of liveliness that put it in the vicinity of the human spirit.

In any case, software at its core is a control technology, consisting of instructions and decisions. This feature also unfolds in its interactions with us – it controls FOR us, but it also controls us, because it is in its nature to control.

Its interaction with us humans is manifold, and likely never before in the history of mankind have so many different interests converged on a single person. Software always makes a demand of us – we are supposed to decide, watch, process. Software knows no physical fatigue. It embodies and produces phenomena that we perceive as restlessness and acceleration.

This poses philosophical, psychological, social, and consequently ethical challenges for us, which will be on our minds in the years to come. These challenges have no universal answer.

As with every technology, the use of software requires an ethical analysis on a case-by-case basis, because the scenarios are too complex and too varied.

My personal conclusion is old-fashioned and does not seem to fit with the times anymore:

Immanuel Kant establishes a connection between human dignity and freedom: a human being has the freedom and the capability to act autonomously – that is, in accordance with self-given principles – and this imparts dignity.

Supposing that ego-strength, will power, and power of judgment are the abilities to implement our own considerations (Nida-Rümelin, 2005) and to live our autonomy, then a strengthening of rationality and power of judgment is needed.

Whether this takes the form of a neurophysiological training program for our self-model or of a humanistic educational program, the focus is always on learning to control our own thoughts, on learning to weigh reasons, and on consistency in acting on and implementing our decisions.

This, rather than the effortless handling of interfaces, will be the critical competency for digitalization.

Software competency does not mean mastering every programming language, but understanding how software is created and what concepts it implements, what it makes possible and what it excludes.

In biographically or societally significant decision-making situations, we do not always want to rely on software as a shield against the flood of information and influence, or as an aid in the decision process.

An appeal for a new awareness seems too laden with pathos, in my opinion. However, the often-used citation from Kant's102 „Answer to the Question: What Is Enlightenment?“ has not lost its significance over the last 234 years and proves its timeliness in the context of software use. „Sapere aude!“ is more relevant today than ever before.

Self-confidence, power of judgment, and maturity are our personal firewall in the age of digitalization.

Acknowledgment

My thanks go first and foremost to Professor Nida-Rümelin for accepting this topic as a master's thesis. With his unparalleled clarity and structure, he was instrumental in bringing the topic into focus and provided valuable tips for my literature research, some of which I would not have thought of on my own.

Professor Verena Mayer gave valuable tips and support in the master's seminar, and Nikil Mukerji likewise contributed valuable literature resources. My special thanks are reserved for my fellow students in the PPW study program, who were always available for an open discussion of thesis topics.

I would like to thank Martin Klein and other work colleagues for their contributions and their willingness to discuss company-internal topics and unpublished research.

I am grateful to my partner in life, Sonja, for her understanding of my shift in focus away from time spent together and for proofreading the finished work.

Markus Walzl

Vienna, November 19, 2017

Literature and Sources

Académie française (Ed.). Dictionnaire de l'Académie française (9th ed.).

Aghajan, Z. M., Acharya, L., Moore, J. J., Cushman, J. D., Vuong, C., & Mehta, M. R. (2014). Impaired spatial selectivity and intact phase precession in two-dimensional virtual reality. Nature Neuroscience, 18, 121. doi:10.1038/nn.3884 https://www.nature.com/articles/nn.3884 - supplementary-information

Algorithm Watch. (2017). Das ADM Manifest. Retrieved from https://algorithmwatch.org/de/das-adm-manifest-the-adm-manifesto/

Anderson, C. (2008). The end of theory: the data deluge makes the scientific method obsolete. Wired (06/23/2008).

Assheuer, T. (2017, 04/16/2017). Die Hippies sind schuld. Die Zeit.

Austin, D. (2017). Alexa, what makes you so Habit-Forming. Retrieved from https://www.nirandfar.com/2017/06/how-amazons-alexa-hooks-you.html

Bachimont, B. (2008). Signes formels et computation numérique: entre intuition et formalisme. In H. Schramm & L. Schwarte (Eds.), Instrumente in Kunst und Wissenschaft - Zur Architektonik kultureller Grenzen im 17. Jahrhundert. Berlin: Walter de Gruyter Verlag.

Bard, A., & Söderqvist, J. (2012). The Futurica Trilogy. Stockholm: Stockholm Text.

Baudrillard, J. (1981). Simulacres et Simulation. Paris: Éditions Galilée.

BEA. (2012). Final Report: On the Accident on 1st June 2009 to the Airbus A330-203, Registered F-GZCP, Operated by Air France, Flight AF447, Rio de Janeiro to Paris. Retrieved from www.bea.aero/docspa/2009/f-cp090601.en/pdf/f-cp090601.en.pdf

Beck, E. (2016). A theory of persuasive computer algorithms for rhetorical code studies. Enculturation, 23.

Beckermann, A. (2012). Aufsätze. Vol. 1: Philosophie des Geistes. Bielefeld: Universitätsbibliothek Bielefeld.

Beebee, H., Hitchcock, C., & Menzies, P. (2012). The Oxford Handbook of Causation. Oxford: Oxford University Press.

Beland, L.-P., & Murphy, R. (2016). Communication: Technology, distraction & Student performance. Labour Economics, 41 (August 2016), 61-76.

Bilton, N. (2012). Disruptions: Next Step for Technology Is Becoming the Background. The New York Times.

Bostrom, N. (2003). Are you living in a computer simulation? Philosophical Quarterly, 53 (211), 243-255.

Bostrom, N. (2016). Superintelligenz. Szenarien einer kommenden Revolution. Frankfurt am Main: Suhrkamp.

Brasel, S. A., & Gips, J. (2014). Tablets, touchscreens, and touchpads: how varying touch interfaces trigger psychological ownership and endowment. Journal of Consumer Psychology, 24, 226-233.

Brasel, S. A., & Gips, J. (2015). Interface Psychology: Touchscreens Change Attribute Importance, Decision Criteria, and Behavior in Online Choice. Cyberpsychology, Behavior and Social Networking, 18, 534-538.

Breit, L., & Redl, B. (2017, 08/18/2017). Wir Selbstvermesser. Der Standard, p. 23.

Brock, K. (2013). Engaging the Action-Oriented Nature of Computation: towards a Rhetorical Code Studies. Retrieved from NCSU Digital Repository. North Carolina State University: http://repository.lib.ncsu.edu/ir/handle/1840.16/8460.

Buchanan, M. (2015). Physics in finance: Trading at the speed of light. Nature, 518, 161-163.

Calleja, G., & Schwager, C. (2004). Rhizomatic cyborgs: hypertextual considerations in a posthuman age. Technoetic Arts, 2 (1), 3-15. doi:10.1386/tear.2.1.3/0

Carr, N. (2015). The Glass Cage. London: The Bodley Head.

Carr, N. (2017, 10/07/2017). How smartphones hijack our minds. The Wall Street Journal.

Charisius, H. (2016). Trugbilder im Hirnscan. Süddeutsche Zeitung (07/05/2016).

Christl, W., & Spiekermann, S. (2016). Networks of Control. A Report on Corporate Surveillance, Digital Tracking, Big Data & Privacy. Wien: Facultas.

Conly, S. (2013). Against Autonomy. Justifying coercive paternalism. Cambridge, MA: Cambridge University Press.

Constine, J. (2013, 05/15/2013). Google Unites Gmail And G+ Chat Into “Hangouts” Cross-Platform Text And Group Video Messaging App. Techcrunch. Retrieved from https://techcrunch.com/2013/05/15/google-hangouts-messaging-app/

Dennett, D. (2013, 05/22/2013) You can make Aristotle look like a flaming idiot/Interviewer: J. Baggini. The Guardian, London.

Dennett, D. (2017). A History of Qualia. Topoi. Springer Science+Business Media B.V.

Deterding, S., Khaled, R., Nacke, L., & Dixon, D. (2011). Gamification: Toward a Definition. Paper presented at the Workshop on Gamification at the ACM Intl. Conf. on Human Factors in Computing Systems (CHI).

Dilger, B. (2000). The Ideology of Ease. Journal of Electronic Publishing, 6 (1).

Dworkin, G. (2017). Paternalism. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2017 ed.): Metaphysics Research Lab, Stanford University.

Elder, R., & Krishna, A. (2012). The “visual depiction effect” in advertising: facilitating embodied mental simulation through product orientation. Journal of Consumer Research, 38, 988-1003.

Epp, C., Lippold, M., & Madryk, R. L. (2011). Identifying Emotional States Using Keystroke Dynamics. Paper presented at the Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems (CHI 2011), Vancouver.

Evans, J. S. B. T. (2006). The heuristic-analytic theory of reasoning: Extension and evaluation. Psychonomic Bulletin & Review, 13 (3), 378-395. doi:10.3758/bf03193858

Eyal, N. (2014). Hooked: How to Build Habit-Forming Products (R. Hoover Ed.). New York: Penguin.

Feenberg, A. (2009). Function and Meaning: The Double Aspects of Technology. Paper presented at the Conference on Technology, the Media and Phenomenology, Stockholm.

Feenberg, A. (2016, February-April) Part of the Technical System/Interviewer: Z. Boang. New Philosopher (Vol. Issue 11).

Feenberg, A. (2017a). A Critical Theory of Technology. In U. Felt, R. Fouché, C. A. Miller, & L. Smith-Doerr (Eds.), The Handbook of Science and Technology Studies (pp. 635-664). Cambridge Massachusetts: MIT Press.

Feenberg, A. (2017b). Technosystem: The social life of reason. Cambridge: Harvard University Press.

Fishman, C. (1996). They write the right stuff. FastCompany Magazine (December).

Floridi, L. (2007). A look into the future impact of ICT on our lives. The Information Society, 23 (1), 59-64.

Floridi, L. (2015). Die 4. Revolution: Suhrkamp.

Foer, F. (2017). World Without Mind. London: Jonathan Cape.

Freund, T. (2006). Software Engineering durch Modellierung wissensintensiver Entwicklungsprozesse. Berlin: Gito-Verlag.

Frick, W. (2015). When Your Boss Wears Metal Pants. Harvard Business Review (June 2015), 84-89.

Gigerenzer, G. (2007). Bauchentscheidungen. Die Intelligenz des Unbewussten und die Macht der Intuition. München: Bertelsmann.

Google. (2017). Hack - Programming productivity without breaking things. Retrieved from http://hacklang.org/

Greenfield, A. (2017). Radical Technologies. London: Verso.

Hackett, R. (2016). Watch Elon Musk Divulge His Biggest Fear About Artificial Intelligence: Fortune Magazine.

Harman, G. (2005). Heidegger on Objects and Things. In B. Latour & P. Weibel (Eds.), Making Things Public: MIT Press.

Harman, G. (2010). Technology, Objects and Things in Heidegger. Cambridge Journal of Economics, 34 (1), 17-25.

Harper, R. (2016). Practical Foundations for Programming Languages. Cambridge: Cambridge University Press.

Hayles, N. K. (2005). My Mother was a Computer: Digital Subjects and Literary Texts. Chicago: University of Chicago Press.

Heersmink, R. (2016). The Internet, Cognitive Enhancement, and the Values of Cognition. Minds and Machines, 26 (4), 389-407.

Helbing, D. (2015). "Big Nudging" - zur Problemlösung wenig geeignet. Spektrum der Wissenschaft (11/12/2015).

Helbing, D., Frey, B. S., Gigerenzer, G., Hafen, E., Hagner, M., Hofstetter, Y., . . . Zwiter, A. (2015). IT-Revolution: Digitale Demokratie statt Datendiktatur. Das Digital Manifest. Spektrum der Wissenschaft, Hintergrund (12/17/2015).

Hofstetter, Y. (2015). Wenn intelligente Maschinen die digitale Gesellschaft steuern. Spektrum der Wissenschaft (11/12/2015).

Hürter, T. (2016). Alles ist 0 und 1. Hohe Luft (01/2017. Sonderbeilage Digitalisierung: Schlauer als wir), 16-17.

Introna, L. D. (2011). The enframing of code: Agency, Originality and the Plagiarist. Theory, Culture and Society, 28 (6), 113-141.

James, W. (1899). Talks to Teachers on Psychology: and to Students on Some of Life's Ideals. Dover Publications, 2001.

Johnson, E. J., Shu, S. B., Dellaert, B. G. C., & Fox, C. (2012). Beyond Nudges: Tools of a Choice Architecture. Marketing Letters, 23:2, 487-504.

Kahneman, D. (2011). Thinking, Fast and Slow. London: Penguin.

Kaku, M. (2011). Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100. London: Doubleday.

Kant, I. (1784). Beantwortung der Frage: Was ist Aufklärung? Berlinische Monatsschrift.

Kirkpatrick, D. (2011). The Facebook effect. New York: Simon & Schuster.

Kitchin, R., & Dodge, M. (2011). Code/Space - Software and Everyday Life. Cambridge, Massachusetts: The MIT Press.

Kurzweil, R. (2012). How to create a Mind: the Secret of Human Thought Revealed. New York: Viking Books.

Lanier, J. (2009). You are not a gadget. London: Penguin Books.

Lanier, J. (2014). Who owns the future. London: Penguin Books Ltd.

Latour, B. (2002). Morality and Technology: The End of the Means. Theory, Culture and Society, 19, 247-260.

Lessig, L. (2000). Code is law. On Liberty in Cyberspace. Harvard Magazine, 1.

Liessmann, K. P. (2017). Panel discussion. Paper presented at the 20. Philosophicum Lech, Lech.

Lobo, S. (2014, 01/12/2014). Das Ende der Utopie: Die digitale Kränkung des Menschen. Frankfurter Allgemeine Zeitung.

Logg, J. M. (2017). Theory of Machine: When do people rely on Algorithms? Paper presented at the AoM Annual Meeting 2017, Atlanta.

Ludewig, J., & Lichter, H. (2007). Software Engineering: dpunkt Verlag.

Mackenzie, A. (2006). Cutting Code: Software and Sociality: Peter Lang.

Manjoo, F. (2017, 05/10/2017). Tech’s Frightful Five: They’ve Got Us. The New York Times. Retrieved from https://www.nytimes.com/2017/05/10/technology/techs-frightful-five-theyve-got-us.html

Mau, S. (2017) Das metrische Wir/Interviewer: A. Lobe. (Vol. 07), Die Zeit.

McCandless, D. (2015). Codebases: Millions of lines of code. Information is Beautiful.

McLuhan, M. (1994). Understanding Media: The Extensions of Man. Cambridge: MIT Press.

Meier, C. (2017, 08/13/2017). Unser Freund, der Algorithmus. Welt am Sonntag. Retrieved from https://blendle.com/i/welt-am-sonntag/unser-freund-der-algorithmus/bnl-wams-20170813-53_1

Metzinger, T. (2004). Being no one: The self-model theory of subjectivity. Cambridge, MA: MIT Press.

Metzinger, T. (2014). Der Ego-Tunnel. Eine neue Philosophie des Selbst: von der Hirnforschung zur Bewusstseinsethik. Berlin: Piper Verlag.

Montag, C. (2016). Persönlichkeit. Auf der Suche nach unserer Identität. Berlin/Heidelberg: Springer.

Mukerji, N. (2014). Technological progress and responsibility. In F. Battaglia, N. Mukerji, & J. Nida-Rümelin (Eds.), Rethinking Responsiblity in Science and Technology (Vol. RoboLaw series; 3, pp. 25-36). Pisa: Pisa University Press.

Musk, E. (2016, 07/15/2017) Elon Musk Says Artificial Intelligence Is the 'Greatest Risk We Face as a Civilization'/Interviewer: D. Z. Morris. Fortune.com, Fortune Magazine.

Negroponte, N. (1995). Being Digital. New York: Alfred A. Knopf.

Nida-Rümelin, J. (2001). Strukturelle Rationalität. Stuttgart: Reclam.

Nida-Rümelin, J. (2005). Über menschliche Freiheit. Stuttgart: Reclam.

Nida-Rümelin, J. (2011). Verantwortung. Stuttgart: Reclam.

Nida-Rümelin, J. (2014). On the concept of responsibility. In F. Battaglia, N. Mukerji, & J. Nida-Rümelin (Eds.), Rethinking Responsiblity in Science and Technology (Vol. RoboLaw series; 3, pp. 13-24). Pisa: Pisa University Press.

Nida-Rümelin, J., & Singer, W. (2006, 01/21/2006) Gehirnforscher sind doch keine Unmenschen" - "Aber vielleicht leiden sie an Schizophrenie?" - Julian Nida-Rümelin und Wolf Singer: Geist contra Großhirn/Interviewer: B. Mauersberg & C. Pries. Frankfurter Rundschau Magazin.

Nida-Rümelin, J., & Singer, W. (2011). Über Bewusstsein und Freien Willen. In T. Bonhoeffer & P. Gruss (Eds.), Zukunft Gehirn (pp. 253-277). München: C.H.Beck.

Noe, A. (2010). Out of our heads. New York: Hill and Wang.

Noessel, C. (2017). Designing Agentive Technology: AI That Works for People. New York: Rosenfeld.

O’Neil, C. (2016). Weapons of Math Destruction. How Big Data Increases Inequality and Threatens Democracy. London: Penguin Random House.

Oracle. (2016). Oracle Gamification Guidelines. Retrieved from http://www.oracle.com/webfolder/ux/Applications/uxd/assets/sites/gamification/phases.html

Parker, S. (2017a, 11/10/2017). Gott weiß, was Facebook mit den Gehirnen unserer Kinder macht. Frankfurter Allgemeine Zeitung. Retrieved from http://www.faz.net/aktuell/wirtschaft/diginomics/sean-parker-ueber-facebooks-nutzer-manipulation-15286051.html

Parker, S. (2017b, 11/09/2017) Sean Parker: Facebook was designed to exploit human "vulnerability"/Interviewer: M. Allen. Axios.

Pasquale, F. (2015). The Algorithmic Self. The Hedgehog Review, 17 (1).

Passig, K. (2009). Standardsituation der Technologiekritik. Merkur, 727, 1144-1150.

Penrose, R. (2009). Computerdenken. Die Debatte um künstliche Intelligenz. Bewußtsein und die Gesetze der Physik. Heidelberg: Spektrum Verlag.

Platon. (1979). Phaidros oder Vom Schönen. Adapted and prefaced by Kurt Hildebrandt (K. Hildebrandt, Trans.) Reclams Universal-Bibliothek 5789 (p. 33). Stuttgart: Philipp Reclam jun.

Potvin, R. (2015). Why Google Stores Billions of Lines of Code in a Single Repository. DevTools@Scale.

Prinz, W. (2013). Selbst im Spiegel. Berlin: Suhrkamp.

Reigeluth, T. B. (2014). Why data is not enough: Digital traces as control of self and self-control. Surveillance & Society, 12 (2), 243-254.

Rid, T. (2016). Maschinendämmerung. Eine kurze Geschichte der Kybernetik. Berlin: Ullstein.

Rifkin, J. (2014). The Zero Marginal Cost Society. New York: Palgrave Macmillan.

Rosenberg, D. (2013). Data Before the Fact. In L. Gitelman (Ed.), “Raw Data” Is an Oxymoron. Cambridge, MA: MIT Press.

Ross, N., & Tweedie, N. (2012, 04/28/2012). Air France Flight 447: 'Damn it, we’re going to crash’. The Telegraph.

Roth, G. (2003). Fühlen, Denken, Handeln. Frankfurt am Main: Suhrkamp.

Russell, S., & Norvig, P. (2012). Künstliche Intelligenz. Ein moderner Ansatz (2nd edition). München: Pearson Education.

Saval, N. (2014, 04/22/2014). The Secret History of Life Hacking. PacificStandard. Retrieved from https://psmag.com/economics/the-secret-history-of-life-hacking-self-optimization-78748

Schade, O., Scheithauer, G., & Scheler, S. (2017). 99 Bottles of Beer - one program in 1500 variations. Retrieved from http://www.99-bottles-of-beer.net/

Schlieter, K. (2015). Die Herrschaftsformel: Westend Verlag.

Schlosser, A. (2003). Experiencing products in a virtual world: the role of goals and imagery in influencing attitudes versus intentions. Journal of Consumer Research, 30, 377-383.

Schlosser, A. (2006). Learning through virtual product experience: the role of imagery on true versus false memories. Journal of Consumer Research, 33, 377-383.

Searle, J. R. (2006). Social ontology: Some basic principles. Anthropological Theory, 6 (1), 12-29. doi:10.1177/1463499606061731

Searle, J. R. (2007). John Searle. Talks at Google. Authors@Google.

Sedgewick, R., & Wayne, K. (2011). Algorithms (4th ed.). New York: Addison-Wesley.

Simon, H. A. (1959). Theories of decision making in economics and behavioral science. American Economic Review., 49 (3), 253-283.

Skinner, B. F. (2014). Contingencies of Reinforcement: A theoretical analysis (J. S. Vargas Ed. Vol. Book 3): B.F. Skinner Foundation.

Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips. Science, 333 (6043), 776-778. doi:10.1126/science.1207745

Sparrow, J. (2014). Soylent, Neoliberalism and the Politics of Life Hacking. Retrieved from counterpunch.org/2014/05/19/solyent-neoliberalism-and-the-politics-of-life-hacking/

Suchman, L., & Sharkey, N. (2013). Wishful Mnemonics and Autonomous Killing Machines. AISB Quarterly, 136 (May), 14-22.

Sunstein, C. R. (2015). Nudging and Choice Architecture: Ethical Considerations. SSRN Electronic Journal.

Thaler, R. H., & Sunstein, C. R. (2009). Nudge. Improving Decisions about Health, Wealth and Happiness. London: Penguin.

Thrift, N., & French, S. (2002). The automatic production of space. Transactions of the Institute of British Geographers, 27 (3), 309-335. doi:10.1111/1475-5661.00057

Turing, A. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 42 (2), 230-265.

Turner, F. (2006). From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism. Chicago: University of Chicago Press.

van Nimwegen, C. (2008). The paradox of the guided user: assistance can be counter-effective. (PhD), Universiteit Utrecht, Utrecht.

Vinge, V. (1993). The Coming Technological Singularity: How to Survive in the Post-Human Era. Paper presented at the VISION-21 Symposium sponsored by NASA Lewis Research Center, Ohio Aerospace Institute.

VisuAlgo. (2017). Retrieved from https://visualgo.net/de

vom Brocke, J., Riedl, R., & Léger, P. M. (2013). Application Strategies for Neuroscience in Information Systems Design Science Research. Journal of Computer Information Systems (53), 1-13.

Wajcman, J. (2017). Automation: is it really different this time? The British Journal of Sociology, 68 (1), 119-127.

Ward, A. F., Duke, K., Gneezy, A., & Bos, M. W. (2017). Brain Drain: The Mere Presence of One’s Own Smartphone Reduces Available Cognitive Capacity. Journal of the Association for Consumer Research, 2 Number 2 April 2017 (The Consumer in a connected world), 140-154.

Wegner, D. M., & Ward, A. F. (2013). The Internet Has Become the External Hard Drive for Our Memories. Scientific American, Dec 1, 2013.

Weinmann, M., Schneider, C., & vom Brocke, J. (2015). Digital Nudging. Business & Information Systems Engineering, 58 (6), 433-436.

Weiser, M. (1999). The Origins of Ubiquitous Computing Research at PARC in the Late 1980s. IBM Systems Journal, 38 (no. 4), 693-696.

Weizenbaum, J. (1976). Computer Power and Human Reason: From Judgment to Calculation. New York: W.H.Freeman.

Whitson, J. (2013). Gaming the quantified self. Surveillance & Society, Futures 11(1/2) (Special Issue on Surveillance).

Wikipedia. (2017a). Software. Wikipedia. Retrieved 10/06/2017 from https://de.wikipedia.org/wiki/Software

Wikipedia. (2017b). Algorithmus. Wikipedia. Retrieved 11/02/2017 from https://de.wikipedia.org/wiki/Algorithmus

Wimmer, M. (2014). Antihumanismus, Transhumanismus, Posthumanismus: Bildung nach ihrem Ende Menschenverbesserung - Transhumanismus. Jahrbuch für Pädagogik 2014 (pp. 237-265). Frankfurt am Main: Lang.

Yeomans, M., Shah, A. K., Mullainathan, S., & Kleinberg, J. (2017). Making Sense of Recommendations. Paper presented at the AOM Annual Meeting, Atlanta.

Ziewitz, M. (2016). Governing Algorithms: Myth, Mess, and Methods. Science, Technology & Human Values, 41(1), 3-16.

Zweig, K. (2016). 1. Arbeitspapier: Was ist ein Algorithmus? Retrieved from Berlin: https://algorithmwatch.org/de/arbeitspapier-was-ist-ein-algorithmus/

Zweig, K. (2017, 06/21/2017). Die Macht der Algorithmen. Paper presented at the Autonome Systeme. Wie intelligente Maschinen uns verändern. Annual meeting of the German Council on Ethics, Berlin.

[...]


1 This refers to the companies Amazon, Apple, Alphabet, Facebook, and Microsoft, in reference to the Marvel comic characters the „Frightful Four“. The name was coined by Farhad Manjoo in the New York Times. (Manjoo, 2017)

2 see (Bostrom, 2016; Carr, 2015; Hofstetter, 2015; Lobo, 2014; Rid, 2016; Schlieter, 2015)

3 Elon Musk has since clarified that he does not believe artificial intelligence will develop a will of its own. However, the unintended consequences of human use could certainly be serious. (Hackett, 2016)

4 In the early 1970s, the world experienced its first software crisis. For the first time, software was more expensive than hardware, and big projects failed. The US government demanded that IBM list program code separately as ‚software‘ on its invoices, thereby setting a quasi-standard.

5 The French „logiciel“ points more strongly to its origin – a neologism from „logique“ and „matériel“: „qui concerne les aspects logiques d'un ordinateur ou d'un système de traitement de l'information, par opposition aux aspects matériels.“ (Académie française)

6 The prime example is the program ‘Word’ that I am using to write this work, which developed into a paradigm of ‚word processing‘ with functions that far exceed those of an electronic typewriter.

7 The development of cybernetics as the basis for the spread and popularity of software, including for solving non-technical problems, is described by many authors, for example Michael Wimmer (Wimmer, 2014). A very detailed journalistic account is given by Thomas Rid (Rid, 2016).

8 Primarily, these are JavaScript, Python, SQL, PHP, Ruby, Perl and their derivatives.

9 For those who would like to experience the full bandwidth playfully, the German site „99 Bottles of Beer“ is recommended. There, the lyrics of the song „99 bottles of beer“ are rendered in 1,500 different programming languages. (Schade, Scheithauer, & Scheler, 2017)
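
For illustration, one further rendering of the song's loop, a few lines of Python (my own sketch, not one of the site's 1,500 variants):

    # "99 bottles of beer" as a simple loop.
    def phrase(k):
        return f"{k if k else 'no more'} bottle{'' if k == 1 else 's'} of beer"

    for n in range(99, 0, -1):
        print(f"{phrase(n)} on the wall, {phrase(n)}.")
        print(f"Take one down and pass it around, {phrase(n - 1)} on the wall.")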

10 An example is Facebook with its further development of PHP, the so-called „Hack“ at hacklang.org. (Google, 2017)

11 Whether this is much or little remains to be seen. The German Bürgerliches Gesetzbuch has 184,000 words. At an average sentence length of 23 words for legal text, this makes 8,000 sentences of „behavioral instruction“. The US-American Constitution, by contrast, contains only 4,400 words.

12 Which is annoying as a customer, since with software there is always an element of hope regarding its future viability. Consequently, the decision involves not only the current quality of the software but also an assessment of the company: its ability to keep the software up to date in terms of security, to adapt it to changing requirements, and to implement the promised features.

13 This approach refers to a hierarchy comparable to the layers of our brain, with equivalents in the animal kingdom: the so-called hardware abstraction layer corresponds to the nervous system and thus to single-celled organisms. The brainstem would contain something like an operating system, enabling functions that early multi-cellular organisms could execute. Acquired abilities like driving a car, riding a bike, etc. would be like classic application programs suited to limited problems. Artificial intelligence would correspond to the cerebral cortex. (Nguyen, 2003 in Kitchin & Dodge, 2011)

14 As an analogy, it is not definitional for the term ‚opera‘ or ‚Zauberflöte‘ whether it is performed in a theater, broadcast via radio/TV or sold as CD. (Ludewig & Lichter, 2007: 34)

15 Although, to a limited extent, this is possible and used as a method to break encryption.

16 For an extended discussion on the topic, I refer to contributions by, among others, Sir Roger Penrose (Penrose, 2009), John Searle (Searle, 2006), Daniel Dennett (Dennett, 2013) or Alva Noe (Noe, 2010).

17 Kitchin and Dodge use the term in the same way as „data“ in this context. (Kitchin & Dodge, 2011)

18 In reference to the software topic, articles by for example John Searle (Searle, 2006), Alva Noe (Noe, 2010), Daniel Dennett (Dennett, 2013), Sir Roger Penrose (Penrose, 2009) and Thomas Metzinger (Metzinger, 2004) should be mentioned.

19 Even when the focus is on software, one must not forget that digitalization is an intensely material process. Chips keep getting smaller and architectures and materials evolve under the pressure of cost, miniaturization, and energy consumption, but they remain the material basis of the digital world – cell towers, transoceanic cables, routers, and huge cloud farms are anything but immaterial, to say nothing of the Internet of Things and smartphones.

20 Leibniz, who in the 17th century discovered – or rather described – the binary system, thought that everything could be calculated. At the same time, he invented the infinitesimal calculus, characterizing the world as a smooth structure without discrete levels or jumps. (Hürter, 2016)

21 https://de.wikipedia.org/wiki/DNA-Computer

22 Why software „runs“ unfortunately remains an unanswered question for me.

23 The term supposedly goes back to an Arabic astronomer of the 9th century and to the corruptions of his name that followed over time. (Zweig, 2016)

24 Sorting and searching are other groups of applications.

Very clear examples, even for non-mathematicians, can be found at visualgo.net, where algorithmic problem-solving can be tried out hands-on. (VisuAlgo, 2017)

25 This characterization already carries within it an exaltation and a technological promise, namely that of immortality and the myth of the human as creator: frozen like a fertilized egg, the algorithm awaits its discovery, in order then – through the tireless and, what is more, time- and space-independent (!) execution of its creator's idea – to secure eternal life for him. With this short attempt at interpretation, I want to point out here already that the discussion around algorithms and software routinely carries many elements of modern mythology and also of ideology. It is far less pragmatic than is sometimes suggested.

26 An often-used example is the shortest-path problem: here it must be determined exactly which data may be used (current traffic density, road capacities, road lengths, current construction zones) and what is to be optimized – minimizing the total distance or minimizing the expected driving time. Other optimization criteria could be: the most scenic route, as few turns as possible, avoiding highways, etc. (Zweig, 2016)
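
A minimal Python sketch (hypothetical names and data) of how the chosen criterion becomes a cost function that then defines what the „best“ route is:

    # The optimization criterion decides what counts as the "best" route.
    from dataclasses import dataclass

    @dataclass
    class Segment:
        length_km: float
        expected_minutes: float  # derived, e.g., from current traffic density
        is_highway: bool
        scenic_score: float      # 0..1, higher means nicer

    def segment_cost(seg, criterion):
        if criterion == "shortest":
            return seg.length_km
        if criterion == "fastest":
            return seg.expected_minutes
        if criterion == "no_highways":
            return seg.expected_minutes + (1e6 if seg.is_highway else 0.0)
        if criterion == "scenic":
            return seg.length_km * (1.0 - seg.scenic_score)
        raise ValueError(f"unknown criterion: {criterion}")

    def route_cost(route, criterion):
        return sum(segment_cost(seg, criterion) for seg in route)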

27 Software bias refers to the prejudices, value constructs, and attitudes of programmers and their environment that are transferred into software.

28 Connecting two leading science disciplines – informatics and neurosciences – fuels the speculation about the possibilities and dangers of artificial intelligence. This is not the topic of this work.

29 Controversial examples are terror surveillance and predictive policing: should surveillance aim to identify every potential criminal, thereby sweeping up many law-abiding citizens or even putting them in preventive detention, or should it step in only when the probability is very high and no innocent person is accused?

30 I refer to Weizenbaum’s ELIZA experimental design (Weizenbaum, 1976).

31 See Wolfgang Prinz, who considers planning a key characteristic of agentivity, seeing it as the ability „to find appropriate means for the achievement of given goals“ (Prinz, 2013: 187)

32 Chris Noessel, in my opinion, makes a very useful classification:

Artificial General Intelligence, Artificial Superintelligence („Singularity“ would fall in this category (Vinge, 1993)), and Artificial Narrow Intelligence. Within the latter, he distinguishes between assistants and agents, but more on this topic later. (Noessel, 2017)

33 He adds: “We’d have to place a whole lot of trust in the people and companies running the system.” (Weiser, 1999)

34 Although software itself is now often free, its use in the overall context is not: computers, smartphones, internet access, and electricity are minimum requirements that are neither affordable for all people nor accessible everywhere. This exclusionary aspect of digitalization shall only be mentioned for the sake of completeness. The physical side of software use is a great challenge for the expansion of businesses like Facebook and Google; how they deal with it forms the backdrop of their projects offering free hardware and internet access.

35 In the US, the acquisition of profile data from so-called data brokers is legal and common practice. How this data is generated and which models are employed to draw initial conclusions about reliability, circle of friends, political affiliation, etc. mostly remains a trade secret. (Carr, 2015; Christl & Spiekermann, 2016)

36 Studies with judges and teachers, for example. (Meier, 2017)

37 70 % of all financial transactions are controlled by algorithms, according to an estimate by Mark Buchanan in his article „Physics in finance: Trading at the speed of light“ (Buchanan, 2015).

38 A typical credit report contains information about a consumer’s payment and debt history as provided by banks, lenders, collection agencies, and other institutions; this includes, for instance, the number and type of accounts, the dates they were opened, and information about bankruptcies, liens, and judgments. Consumer reporting agencies in turn provide these reports to creditors and potential creditors, including credit card issuers, car loan lenders, mortgage lenders, but also to retailers, utility companies, mobile phone service providers, collection agencies, and any entities with a court order. Experian, the world’s largest credit reporting agency, and, next to Equifax and TransUnion, one of the three major agencies in the United States, has credit data on 918 million people. (Christl & Spiekermann, 2016: 57)
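
A minimal sketch (field names hypothetical) of the kind of record such a report aggregates:

    # The kind of record a consumer credit report aggregates (sketch).
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class Account:
        account_type: str   # e.g. "credit card", "car loan", "mortgage"
        opened_on: date
        payment_history: list = field(default_factory=list)  # e.g. ["on_time", "late_30d"]

    @dataclass
    class CreditReport:
        consumer_id: str
        accounts: list = field(default_factory=list)
        bankruptcies: int = 0
        liens: int = 0
        judgments: int = 0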

39 In the case of lifehacking, the good to be saved is time – a Sisyphean task. Sparrow describes lifehacking as ‘freeing yourself up for whatever you’d rather be doing’ (J. Sparrow, 2014). The goal is never reached, and productivity turns into a good per se. Nikil Saval connects the technology-driven lifehacking trend in Silicon Valley with the “scientific management” of the early 20th century (Saval, 2014), the difference being that Taylor “hacked” the lives of others, while today the practice has shifted to the individual, voluntary level. Sparrow speaks of an internalization of management practices by those who are managed; Saval calls it “self-Taylorizing”. (Saval, 2014)

40 The health platform dacadoo, for example, calls the health index it has developed „the personal stock price of your health“ (https://info.dacadoo.com/de/; accessed 11-11-2017).

41 For Alexander Bard and Jan Söderqvist, attention is a kind of currency that will replace money and be transferred in scores. (Bard & Söderqvist, 2012)

42 China intends to introduce a government „social credit system“, mandatory for all citizens by 2020.

43 An example from politics is the Italian 5Stelle movement of the former comedian Beppe Grillo. It is worth noting that the political concept of 5Stelle comes from the former Olivetti manager Gianroberto Casaleggio, who died last year and has been called the „Italian arm of California's cyber-culture“. (Assheuer, 2017)

44 Here as well, I abstain from a normative evaluation, since restrictions can lead to exceptional artistic achievements, as haikus prove, and standardizations make certain forms of communication possible in the first place, like the written word.

45 I have already discussed that with the spread of software and centrally acting algorithms, the creators' moral concepts and culture are spread as well. While in the 20th century Coca Cola was the icon of the „American Way of Life“, today it is Google and Facebook.

46 Hence the increasing pressure to use one's real name. Most networks no longer accept fake profiles in their terms of service and try to enforce this. Explicitly disclosing one's name is no longer even necessary: the available information is sufficient for Facebook to fill in the name automatically.

47 For a detailed description of data collection I refer to „Networks of Control. A Report on Corporate Surveillance, Digital Tracking, Big Data & Privacy” (Christl & Spiekermann, 2016)

48 Statistical correlations describe the “relation existing between phenomena or things or between mathematical or statistical variables which tend to vary, be associated, or occur together in a way not expected on the basis of chance alone”. But “correlation does not imply causation”: if a statistical correlation between two variables is mistakenly assumed to be a causal relationship, it is called a spurious correlation. (Beebee, Hitchcock, & Menzies, 2012)
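
The phenomenon is easy to reproduce: two completely independent random walks often exhibit a strong sample correlation. A minimal Python sketch:

    # Spurious correlation: two independent random walks frequently
    # show a high sample correlation although neither causes the other.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.cumsum(rng.normal(size=1000))  # random walk 1
    y = np.cumsum(rng.normal(size=1000))  # random walk 2, independent of x

    r = np.corrcoef(x, y)[0, 1]
    print(f"correlation of two independent random walks: {r:.2f}")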

49 It would be bad enough if I were met with difficulties because of my attitude towards certain topics – but what if there were an error in the computational process and I fell into the wrong cluster? Given the networking of data brokers, this could have negative effects almost immediately. And owing to the complexity of the algorithms, the input that wrongly impacted my scoring might no longer be locatable or correctable. In most cases, this would require human intervention and forensic investigation that only very few of the parties involved would undertake.

50 See an excellent article by Lucy Suchman and Noel Sharkey (Suchman & Sharkey, 2013)

51 The idea that we are today inseparable from our smartphones around the clock, that we speak with our personal digital assistants as with a person who is present, that we are surveilled and watched by sensors and cameras at all times, and that every user of social media runs at least one private TV channel was already present in the science fiction literature of the 1960s. It seems plausible to say that over the last three decades the ideas of science fiction have been realized in ways their authors – or even McLuhan – could not have foreseen, as a process of co-creation, so to speak. The influence of science fiction on the software and high-tech domain remains significant today (Greenfield, 2017).

52 In this discussion, physical robots usually form their own category, and their many variant forms influence how we perceive them and whether we perceive them as living things or not. Visual, tactile, and acoustic parameters are central to their acceptance by humans. However, one can argue that smartphones can already be called robots if the definition is merely that they save us work; the common definition, though, includes the robot's ability to handle physical things. (https://ifr.org; 11-13-2017)

53 Desktop systems with screen and mouse or touchpad require a much more sophisticated act of abstraction and, in principle, prior experience with a „real“ desk.

54 Airbus cockpits are equipped with a so-called sidestick that offers no tactile feedback – the effect of the pilot's input can only be read off the instruments. This corresponds to the logic and reality of the technical processes: the sensors of the sidestick translate the pilot's movements into digital control impulses that are interpreted by the onboard computer and relayed to the actuators in motors and valves. In between there are no mechanical linkages whose resistance the pilot could feel. Boeing, in contrast, decided to build in a second digital control circuit that relays the actions executed by the actuators back to the control stick in the cockpit, simulating direct mechanical feedback. In a Boeing, the co-pilot can also feel the pilot's control movements in his traditionally shaped control stick. In an Airbus, the co-pilot must always keep an eye on the display and has no direct information about which movements the captain is making with his joystick. Many pilots prefer the Boeing version because it is simpler and more „intuitive“ in difficult situations, while others reject it as too demanding in standard situations. The physical feedback frees the pilot from the cognitive task of interpreting the instrument display and allows him to concentrate on decisions. Whether cognitive overload and the lack of physical feedback to pilot and co-pilot contributed to the crash of Air France flight 447 into the Atlantic in 2009 remains unclear, but it is a topic of discussion in the aviation community. (BEA, 2012; Carr, 2015; Ross & Tweedie, 2012)

55 Linguistically a remarkable coincidence: the Latin root of the word manipulation refers to „a handful“, and this is exactly the ideal measure for the format of a smartphone.

56 A study at the University of Arkansas at Monticello claims that students' test performance improves by a grade when mobile phones are not allowed in the room, compared to allowing them in the room but turned off and stowed in a pocket or bag. The worst results were recorded in the group where the turned-off smartphones were placed directly in front of the students on the table. (Beland & Murphy, 2016) For similar results see also (Ward, Duke, Gneezy, & Bos, 2017: 140)

57 In German, the term „nudging“ is also used as a synonym for the application of libertarian-paternalistic principles. The term was coined by Cass Sunstein and the Nobel laureate Richard Thaler. (Thaler & Sunstein, 2009)

58 Certain skill sets (for example, making a cake from scratch or manually configuring the components of a computer program) are replaced by more specialized and less detailed knowledge structures (a ready-made mix or an automated installation program).

59 See note 57.

60 The term bias is the more fitting one because it more clearly denotes a tendency rather than a preconception.

61 As a seemingly counter-intuitive example, limiting the number of products a customer may purchase in a store increases the actual number of articles sold. The upper limit functions as an anchor, and customers correct downwards from it only insufficiently. (Wansink et al., 1998)

62 Quoting a price in the former national currency from before the Euro is a classic nudge for suggesting that prices are too high.

63 Time pressure („the offer is only valid for 5 more minutes“), simulated scarcity („only 4 more seats available“ or „five other users are looking at this offer right now“) and seemingly objective advice („experience shows that the price for this flight will increase within the next 5 days“) are textbook applications of digital nudging. In almost all situations the user could decide differently, and objectively there is no disadvantage in doing so, rather the opposite. The effort is high, however, and analyses of user behavior show that the mechanisms used work decidedly to the operator's advantage.
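How little machinery these patterns require can be made visible in a deliberately crude Python sketch; the message texts and numbers are invented, and no real booking system is quoted here.

```python
# Illustrative only: a page could generate all three nudges named above
# without consulting any real inventory, timer, or price forecast.
import random

def time_pressure_banner(minutes: int = 5) -> str:
    return f"The offer is only valid for {minutes} more minutes."

def scarcity_banner(actual_seats_left: int) -> str:
    # Even with ample availability, a small number is displayed.
    shown = min(actual_seats_left, random.randint(2, 5))
    return f"Only {shown} more seats available!"

def pseudo_advice_banner() -> str:
    # "Seemingly objective advice" needs no forecasting model at all.
    return "Experience shows the price for this flight will increase in the next 5 days."

if __name__ == "__main__":
    print(time_pressure_banner())
    print(scarcity_banner(actual_seats_left=180))
    print(pseudo_advice_banner())
```

The decisive point is that none of the three messages is constrained by the state of the world; they are pure interface design.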

64 A common mechanism, exploiting the human tendency to avoid change due to status quo bias and inertia (lethargy), is the use of default values (Kahneman et al., 1991; Santos, 2011); in a digital context, however, people often dismiss default settings (Benartzi and Lehrer, 2015).
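A minimal sketch, assuming a hypothetical newsletter form, of how the default alone decides the outcome for every user who never touches the setting:

```python
# Minimal sketch: the choice architecture is nothing but an initial value.
# The form and its field are hypothetical examples.
from dataclasses import dataclass

@dataclass
class SignupForm:
    # Pre-checked box: staying subscribed requires no action (status quo);
    # opting out requires one. Flipping the default flips the outcome for
    # everyone who submits the form unchanged.
    newsletter_subscribed: bool = True

user = SignupForm()                # user submits the form as presented
print(user.newsletter_subscribed)  # True: the default decided, not the user
```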

65 What seems to me the most extreme version is the electro-shock bracelet „Pavlok“. It is promoted as a way to get rid of bad habits, but according to the introduction in the app it is also conducive to learning good ones. The bracelet can dispense electric shocks when one surfs too long, looks at the wrong sites, gets up too late, talks too much, and so on.

66 The terminology used in this domain is exclusively English; translations do not exist.

67 External triggers are, for example, notifications or status updates that make us reach for the smartphone. External triggers are not suited to having a deep impact; for that, internal triggers are necessary, and negative emotions like boredom, stress, anger, or loneliness work best, because it is then that we check Facebook, look at our incoming messages, or start a game.
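As a hypothetical sketch of such an external trigger, the following fragment fires a notification after a period of inactivity; the message text and the idle threshold are invented:

```python
# Hypothetical sketch of an external trigger: a scheduled notification whose
# only purpose is to make the user reach for the smartphone again.
import time
from typing import Optional

IDLE_THRESHOLD_S = 3600  # invented value: fire after one hour without app use

def maybe_trigger(last_opened: float, now: float) -> Optional[str]:
    if now - last_opened > IDLE_THRESHOLD_S:
        return "You have new activity waiting. Tap to see it."
    return None

# Simulate a user who has not opened the app for two hours.
print(maybe_trigger(last_opened=time.time() - 7200, now=time.time()))
```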

68 In fact, female voices and names are preferred in Europe and the US. The trust level is higher, we expect more empathy, and we tend to interact with them less aggressively and more respectfully.

69 By comparison, the smartphone is already quite cumbersome: pick it up, unlock it, start the app or tap the notification; that is already two more steps.

The fact that the use of Amazon’s assistant leads to a purchase at Amazon more than 45 % of the time, although the user originally had no such intention, demonstrates the effectiveness of the strategy.

70 The entire model is based on Skinner’s behaviorist model, with the addition of insights from neurobiology. Critics decry the view of the person as a trivial machine and the treatment of the human being like a lab rat.

71 Often with the motivation of improving the service for the next round, because the data is more complete or new friends were invited, etc. In the case of Alexa, each interaction improves recognition and completes the digital profile of the user.

72 Whoever asks what purpose a so-called smartwatch could serve realizes very soon that it can record the physical state of the user, including movements and physical habits, and deduce from it conclusions about the emotional state. Many models can read heart rate, skin temperature, and skin conductance, so they can also serve as fitness devices. The recognition of patterns from wrist movements has advanced so far that it can detect not only whether someone is standing, sitting, or lying down, but also whether the person is driving a car or riding a bike. Likewise, some devices can recognize manual activities like cooking, ironing, and cleaning. Built-in GPS receivers or integration with smartphone data result in a very dense representation of a person’s life.
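A hedged sketch of the pattern-recognition step described here, assuming windowed three-axis accelerometer data and some already trained classifier; the feature set and activity labels are illustrative:

```python
# Sketch of wrist-based activity recognition as described above. The
# features and labels are illustrative; real products use far richer
# feature sets and trained models.
import numpy as np

LABELS = ["sitting", "standing", "lying", "cycling", "driving", "ironing"]

def window_features(accel: np.ndarray) -> np.ndarray:
    """Summarize one window of 3-axis accelerometer samples (shape N x 3)."""
    return np.concatenate([
        accel.mean(axis=0),                        # posture: gravity per axis
        accel.std(axis=0),                         # movement intensity per axis
        [np.abs(np.diff(accel, axis=0)).mean()],   # jerkiness of the motion
    ])

def classify(accel_window: np.ndarray, model) -> str:
    """'model' stands in for any trained classifier with a predict() method."""
    features = window_features(accel_window)
    return LABELS[int(model.predict([features])[0])]
```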

73 By now, all vendors of communication solutions for call centers offer emotion recognition. For example: https://www.aspect.com/solutions/workforce-optimization/analytics-for-speech-and-text/

74 As early as 2011, a Canadian team of scientists achieved a success rate of 88 % in recognizing six emotional states (confidence, hesitancy, nervousness, relaxation, sadness, and tiredness) from typing dynamics. (Epp, Lippold, & Mandryk, 2011)
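The raw material of such keystroke-dynamics studies can be sketched simply. Dwell time (how long a key is held) and flight time (the gap between key presses) are standard features in this literature; the event data below and any mapping to emotional states are invented here:

```python
# Sketch of keystroke-dynamics features of the kind used in such studies.
# The event data are invented; a trained classifier would map features
# like these to emotional-state labels.
from statistics import mean

# (key, press_time_ms, release_time_ms) recorded during one typing session
events = [("h", 0, 95), ("e", 180, 260), ("l", 340, 450), ("p", 620, 700)]

dwell_times = [release - press for _, press, release in events]
flight_times = [events[i + 1][1] - events[i][2]   # next press minus last release
                for i in range(len(events) - 1)]

features = {
    "mean_dwell_ms": mean(dwell_times),
    "mean_flight_ms": mean(flight_times),
}
print(features)  # prints the two aggregate features for this session
```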

75 The examples are from an Oracle guide on success in gamification projects. (Oracle, 2016) Oracle suggests using game-design mechanisms for „greater goals and reward status“ and to initiate „long-term engagements“. These mechanisms include so-called quests, missions, and challenges (the completion of a number of actions that follow a certain sequence), competitions (building on rivalry to motivate users by competing against each other), and virtual economies (users can trade and barter by swapping their earned points for services and products).
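As a hedged illustration of how thin the technical layer behind these mechanisms is, a minimal quest-and-points structure might look like the following; all step names and reward values are invented:

```python
# Minimal sketch of the quest mechanics named above: a fixed sequence of
# actions that pays out points into a virtual economy. All names and
# values are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Quest:
    steps: list            # actions that must be completed in this order
    reward_points: int
    done: list = field(default_factory=list)

    def complete_step(self, action: str) -> int:
        if self.done != self.steps and action == self.steps[len(self.done)]:
            self.done.append(action)
            if self.done == self.steps:
                return self.reward_points  # payout into the virtual economy
        return 0

quest = Quest(steps=["fill_profile", "invite_friend", "first_purchase"],
              reward_points=500)
earned = sum(quest.complete_step(a)
             for a in ["fill_profile", "invite_friend", "first_purchase"])
print(earned)  # 500 points, later swappable for services and products
```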

76 An example would be flight 604 of Egyptian Airways, where the pilot explicitly overrode the control software.

77 It makes a difference whether software is supposed to do my job and I think it is better at it than I am, or whether I take on a task because a machine recommends it. Experts reject algorithmic recommendations in their own domain more strongly than laypersons do.

78 Additional examples are the conflation of advisor and advice: the doctor is preferred because he or she appears likeable and competent; conversely, people trust „expert systems“ solely because of the expert label. And the repetition of interaction: it makes a difference whether people are familiar with an algorithm or encounter it for the first time. (Logg, 2017)

79 “Philosophical work on theory of mind considers how people infer intentionality and beliefs in other people and even in other non-humans, such as anthropomorphization of inanimate objects” (Dennett, 1987)

80 Lanier expands the term „computationalism“ into an entire culture of software and technology affinity, especially in Silicon Valley.

81 The thought experiment „Chinese room“ by John Searle is a classic argument of the critics. https://en.wikipedia.org/wiki/Chinese_room; Searle, John (2009), „Chinese room argument“, Scholarpedia, 4 (8): 3100, doi:10.4249/scholarpedia.3100

82 Even though everyone speaks of the Cloud, it is of course not ethereal but runs on physical hardware in data centers.

83 For most AI scientists it is completely irrelevant whether it is a matter of simulation or of real consciousness (Weak versus Strong AI; in contrast to Strong AI in the sense of Ray Kurzweil, who means by it any form more powerful than the human being); the scientific competition is focused on developing powerful programs that are capable of acting intelligently.

From a pragmatic point of view, this could be dismissed as academic hair-splitting, as is often argued with slight disdain and bemusement when it comes to definitions of philosophical terms. But what if science could now produce „real“ intelligence with intrinsic ambitions and intentions, so to speak as a byproduct of convenience AI like Alexa and others?

84 Julian Nida-Rümelin takes rational decision theory as the basis for the conception of rationality in his description of the process of decision-making. (Nida-Rümelin, 2001, 2005)

85 These motivating intentions can, but do not have to, be aligned with the maximization of expected utility.

It is not my de facto intentions, based on mental dispositions and convictions, but the measured consideration of reasons that makes a decision, and the action based on it, rational. (Nida-Rümelin, 2005: 55)

86 The concept of the conscious veto also appears in neurophysiology and in psychology. In connection with studies by Kühn and Brass, a model was discussed according to which decisions are made „unconsciously“ but are played into consciousness for „checking“ before being implemented in action. Kahneman’s model of the two psychic systems contains a similar mechanism. (Metzinger, 2014)

87 Inconspicuously, Kant’s principle of non-instrumentalization has already entered the software debate here. Should it be true that many apps primarily generate personal data and use human behavior in order to generate and resell it, then the human being would boldly have been made into a mere means.

88 „Our mental autonomy is lost whenever a certain part of our cognitive self-model temporarily collapses, and newer research shows that each of us experiences this many hundreds of times a day. If we lost control on the level of physical action as often as on the mental level, then, seen from the outside, we would often appear like a curious mix of alert person and hyperactive sleepwalker.“ (Metzinger, 2014: 136)

89 The list of critics is long; see in particular Judy Wajcman, „Automation: Is it really different this time?“ (Wajcman, 2017)

90 Compare Baudrillard: in Simulacra and Simulation he postulates that virtuality and reality are merging. We are so obsessed with making an ever more perfect copy of the world that at some point we will no longer be able to differentiate between them; then there will be no telling them apart anymore, only „hyper-reality“. This path is characterized by three stages: first imitation, for example a map; then production, by which he means, for example, photography. A photo lends itself to copying and distribution while still representing a real object. Computers and digitalization lead to the last stage, simulation.

Simulation no longer refers to a real object but creates its own new reality, drowning out the old. (Baudrillard, 1981)

91 In contrast to the mechanisms of representative democracy, which consciously try to remove speed and emotion from the debate, current discussion platforms on the internet achieve the opposite through their structure and design. Better-adapted designs that would make political deliberation more likely are conceivable and feasible.

92 “The homogeneity of the Silicon Valley creators is a more dangerous threat to the future than any perceived robotic apocalypse. Too often these purveyors of the future have their backs to society, enchanted by technological promise and blind to the problems around them. It will require more than robots to ensure that the future really is different this time.” (Wajcman, 2017: 126)

93 God did not have to move, he only needed a new name.

94 Heidegger’s criticism of technology was not coined for software or digitalization, and a direct application is not unproblematic, even though his statements prima facie seem very appropriate, just like those of Marshall McLuhan.

95 “Infosphere is a neologism I coined years ago on the basis of “biosphere”, a term referring to that limited region on our planet that supports life. It denotes the whole informational environment constituted by all informational entities (thus including informational agents as well), their properties, interactions, processes and mutual relations. It is an environment comparable to, but different from cyberspace (which is only one of its sub-regions, as it were), since it also includes off-line and analogue spaces of information.” (Floridi, 2007: 3)

96 Every technology has produced its critics and its specific form of technology criticism. Among the most prominent examples is surely Plato’s criticism of writing in the dialogue „Phaedrus“, where among other things he argues that the written word weakens memory and is unsuitable for conveying knowledge because students cannot ask questions of it. The reader fools himself into believing he has grasped something when he actually has not. A text can neither defend itself against unjust criticism nor address the individual needs of its readers. (Platon, 1979: 33)

97 See Kathrin Passig „Standardsituationen der Technologiekritik“ (Passig, 2009)

98 Shitstorms and „viral“ circulation on the internet are the most obvious evidence of this.

99 The legal situation regarding the use of software is chaotic and, in many respects, the internet is a Wild West situation where anyone can stake a claim. (Lessig, 2000)

100 „Digital“ is used here in the sense of software-based, electronic interfaces, not in the sense of states that allow only a „yes“ or „no“.

101 Nikil Mukerji shows in „Technological progress and responsibility“ the role of context with new technologies (Mukerji, 2014), building on Julian Nida-Rümelin’s article „On the concept of responsibility“ (Nida-Rümelin, 2014).

102 „Enlightenment is the human being’s emergence from his self-inflicted immaturity. Immaturity is the inability to use one’s own understanding without guidance from another. This immaturity is self-inflicted when its cause lies not in a deficiency of understanding but in a lack of resolve and courage to use it without guidance from another. (...) Laziness and cowardice are the reasons why so large a proportion of people still prefer to remain immature, although nature freed them from external guidance long ago, and why it is so easy for others to set themselves up as their guardians. It is so comfortable to be immature. If I have a book that has understanding for me, a pastor who has a conscience for me, a doctor who judges my diet for me, and so on, then I need not make any effort myself. I do not need to think, as long as I can pay; others will take over the tedious business for me.“ (Kant, 1784)
