Control, marginalization, immobilization or counterinsurgency – AI systems are predestined to manage the global crisis of capital.
August 1, 2024
A particularly effective method of the endangered genre of subversive science fiction horror is to exaggerate the given late capitalist reality only slightly, to transfer only a few moments of society into the realm of fiction. A classic that makes use of this method is John Carpenter’s They Live,[1] in which capitalist exploitation, oppression and world destruction are attributed to a clandestine alien invasion, leading to signals being broadcast by television stations that manipulate people’s perception. Special glasses block out the signal and reveal the coded, manipulative truth behind everyday capitalist objects, for example when dollar bills are printed with the words “This is your God.” It doesn’t take much to bring the horror of everyday life under capital, to which people inevitably become accustomed, to life in the movie theater through science fiction.
Link: https://exitinenglish.com/2024/08/01/ai-and-crisis-management/
A more subtle, but no less effective approach is taken in the film Advantageous,[2] in which the protagonist is forced by a high-tech corporation to undergo a consciousness transplant into a new body as a guinea pig, under the threat of unemployment and social decline. On the one hand, the neoliberal optimization mania and adaptation discourse is taken to its logical technological end, as the film pushes the usual demands for self-optimization and the “reinvention” of wage earners to the extreme of body swapping. On the other hand, the scenes in which a fully automated infrastructure controlled by AI systems executes the social death of the protagonist by shutting down more and more of the interlinked and digitally controlled infrastructure systems are shocking. Actual human beings are hardly involved anymore. Real possibilities, such as an overdrawn credit card, are mixed with fictional moments. When calling the job center, it simply remains unclear whether the protagonist is dealing with a cynical human or an AI assistant.
Many of the scenes in this “silent dystopia” are particularly disturbing because much of what Advantageous predicted in 2015 is already feasible today. And it is likely that AI-supported social control will prevail in one form or another in the medium term. Managing people under capitalism is problematic, especially in times of crisis, as it also puts psychological strain on most of the wage earners who have to implement this management. Executing the system’s constraints on human material is a tough job to have, and it certainly leaves its mark. Personalities who are fully capable of doing this without lapsing into undesirable “misbehavior” such as sadism or insubordination are few and far between. Automating heavy, stressful tasks – isn’t this the great promise of capitalist rationalization?
Inhuman Resources
Humans are still in charge at the “job center.” But what is already quite common today are AI assistants that are entrusted with the “initial assessment” of wage earners in order to check their employability during the hiring process. In the United States, more and more corporations are using specialized chatbots to screen job applications, make contact and/or conduct initial interviews.[3] It is mainly low-paid, precarious jobs that require low qualifications and have a high volume of applicants that are increasingly being outsourced to the fully automated “inhuman resources” of AI systems. Fast food companies such as McDonald’s or Wendy’s, retail chains or warehouses have chatbots filter applications and conduct job interviews based on standardized questions (“Can you work on weekends?” or “Can you operate a forklift?”). The advantages are obvious: in addition to potential cost savings in human resources (HR), where companies traditionally start cutting costs first, smaller HR teams can process far larger volumes of applications effectively.
Two AI systems developed by start-ups from Arizona and California, Olivia and Mya, are currently leading the way in the industry, but according to the business magazine Forbes, they are still struggling with teething problems. Sometimes the wrong dates or locations are assigned for follow-up conversations, or the language models of the specialized bots are nowhere near as advanced as those of flagship projects such as ChatGPT, which can lead to errors and misunderstandings. But far more problematic is the simple fact that the AI is not a human being with whom special conditions can be discussed. Applicants with disabilities, who would have to negotiate appropriate modifications to their jobs, fall through the cracks, as do wage earners with speech impediments. The same applies to workers with a migrant background who are not fully proficient in the local language.
And this is where the automated discrimination that takes place under the cloak of machine objectivity begins. Socially disadvantaged minorities who do not fit into the machine intelligence scheme are left out of the running when applying for jobs. In July 2023, the city of New York even issued regulations requiring companies that use AI systems for job placements to check them for “racial or gender bias.” The enforcement of this regulation is completely unclear, as the algorithms and selection criteria of the recruitment machines remain under lock and key.
There is also a fundamental problem: the AI-controlled application scanners and chatbots – as with all machine learning systems[4] – have to be trained in pattern recognition using huge amounts of data. The RecruitBot software, for example, scans 600 million online applications in a legal gray area in order to perfect the selection process for companies. The whole thing works “a bit like Netflix,” the founder of this AI start-up explained to Forbes. The software searches for and suggests applicants to companies with the same characteristics that have previously led to successful hires. These selection systems are therefore structurally conservative, as they are trained using the data already available. As a result, they are unable to respond well to changes in the composition of the workforce – such as the influx of migrant workers. Amazon, for example, had to shut down its job application scanner in 2018 after it became clear that it discriminated against women. The software was trained using a mountain of data in which applications from men were disproportionately represented.
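The structural conservatism described here can be illustrated with a deliberately simplified toy model (hypothetical data, not RecruitBot’s or Amazon’s actual software): a screener that scores applicants by the historical hire rate of their features can do nothing but reproduce whatever skew the training data contains.

```python
# Hypothetical historical hiring data: (feature, hired?) pairs.
# Past hires skew heavily toward one group, as in the Amazon case.
history = (
    [("mens_college", True)] * 80 + [("womens_college", True)] * 20
    + [("mens_college", False)] * 20 + [("womens_college", False)] * 80
)

def hire_rate(feature, data):
    """Fraction of past applicants with this feature who were hired."""
    outcomes = [hired for f, hired in data if f == feature]
    return sum(outcomes) / len(outcomes)

# A naive screener that ranks new applicants by these historical
# rates simply inherits the skew of the data it was trained on.
print(hire_rate("mens_college", history))    # 0.8
print(hire_rate("womens_college", history))  # 0.2
```

Real systems use far more elaborate models, but the underlying problem is the same: whatever "led to successful hires" in the past becomes the template for the future.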
At present, such AI systems are primarily used as a tool for the initial assessment of employees, making a pre-selection for the human resources teams. However, the ambition of the creators of such selection software goes much further. The latest chatbots now include the time their interviewees need to answer in their assessments, and they also evaluate the sentence structure, grammatical correctness and complexity of the applicant’s language. The recruitment software Sapia AI is even able to ask applicants more complex questions and evaluate their answers of 50 to 150 words in length in order to check their suitability for the vacancies (“copes well with change, stress,” etc.).
A change of perspective is taking place here. It is no longer the human being who scrutinizes the AI bots during capricious interactions in order to assess their performance, as was the case at the beginning of the AI boom when the systems were made available to the general public. The positions are reversed in job applications: capital’s AI assesses the human material using patterns and algorithms, which are company secrets, to measure their performance. Nevertheless, the owners of the AI start-up Sense HQ, whose chatbots do the rough selection work for Dell and Sony, emphasized that it is only about supporting the human teams in human resources when hiring: “We don’t think that AI should make hiring decisions on its own. That would be dangerous. We don’t think it’s there yet.” This language is revealing. Any decent chatbot would come to the conclusion that the emphasis here is definitely on the “yet.”
The Right to Live Decided By AI?
Few things are more stressful than having to make life or death decisions on the job. Yet this is in fact everyday life for those who work for health insurance companies in the privatized American healthcare system, who have to decide on the type and duration of treatment for their “customers.” The clerks have to reduce the treatment costs of their insured patients to a minimum in order to keep their company’s profits as high as possible – even at the cost of their customers’ health. From the perspective of capital, it therefore seems tempting in late capitalism to have this allocation of right-to-life certificates handled by seemingly objective AI systems.
This is exactly what is allegedly being done to some extent by “healthcare providers” in the United States. At the end of 2023, customers filed a mass lawsuit against the insurance company UnitedHealthcare after their claims for examinations and convalescence following surgery were massively curtailed by an AI system. According to the statement of claim and media research, the AI algorithm was authorized to revise the recommendations of the treating doctors and make its own decisions, meaning that patients’ treatments were terminated far too early.[5]
According to research, the program called nH Predict uses a database of six million patients as an empirical quarry for the usual pattern recognition in order to make draconian misjudgments with an error rate of 90%. All of these errors were in favor of UnitedHealthcare – the largest health insurer in the USA. Insured persons who would normally have a convalescence period of 100 days after a hospital stay had their funding withdrawn after just 14 days by the prognosis AI nH Predict. Since 2019, private insurance companies have allegedly been using such AI programs in a legal grey area to deny patients necessary but costly treatment. At the beginning of February 2024, the relevant U.S. authorities came forward to clarify that AI programs cannot be used to deny benefits.[6] The powerful lobby of the U.S. healthcare industry therefore has a lot of convincing to do in Washington.
What landlord hasn’t experienced this? The nerve-wracking war with defaulting tenants who just don’t want to move out, even though they really can’t afford the latest rent increase. But here too, AI can make life easier for all those customers who are wealthy enough to rent out properties. Two strands of technological innovation are merging to transform the rental real estate market in the United States: the creation of smart homes that are closely networked in terms of information technology, and their control by AI assistants. There is a gold-rush-like atmosphere, as the market for AI real estate is expected to grow to a volume of $1.3 trillion by 2029.[7] The sensors and control systems that make it possible to monitor and control functions such as temperature or energy supply in smart homes from the outside are becoming compatible with AI systems that can control them.
The AI not only functions as an interface between the tenant and their apartment, whose functions – similar to the visions in Blade Runner[8] – would be controlled by voice, but must also anticipate behavior and permanently monitor the properties and their surroundings. So it’s not just about refilling the fridge just in time via a delivery service, or bringing the room to the optimum temperature shortly before the tenant arrives, but also about permanent monitoring, for example of water and electricity consumption – and access control.[9] Biometric locks make it unnecessary to “change locks when tenants change,” as providers of such AI systems for landlords cheerily remark, while smart surveillance cameras, which react to suspicious behavior in the vicinity of properties, create “security and trust,” especially in districts with high crime rates, in order to attract “more tenants.”
But what awaits the defaulting tenant who falls behind with payments in the face of horrendous rents? The access data to the smart locks is changed, while the gentle AI voice informs them of the way to the nearest homeless drop-off point where their personal belongings have been transported. In the event of outbursts of anger or acts of desperation, the smart cameras call the cops. The tenant who has fallen into arrears may be pestered by annoying AI bill collector bots beforehand. In Eastern Europe, there is still an industry of telephone bill collectors. These are reverse call centers that mostly buy up consumer debts and whose employees use threats and persuasion to try to collect the money before the “muscles” on the ground have to take over this work. But this industry is also threatened with extinction. As early as 2023, the mobile phone provider Orange was already experimenting with AI bots that annoyed defaulting customers with phone calls to encourage them to pay soon in a cheerful voice.
And finally, the trend towards implementing artificial intelligence does not stop at the state apparatus. So far, people have not had to deal with AI bots at job center appointments, as predicted in the dystopia Advantageous mentioned at the beginning. But in administration, where overworked clerks are confronted with a flood of applications and administrative processes[10] which can hardly be managed, AI is being pushed forward on a massive scale.[11] The offices of the Federal Republic of Germany also have gigantic amounts of data that are perfect for training AI systems. It is the same basic principle: based on pattern recognition, which is obtained by scanning the data available, the machine intelligence makes decisions that have a very high probability of being “correct” by copying and/or modifying past administrative processes.
Citizen’s allowances, child benefits, unemployment benefits, short-term working allowances, grants and applications – in the future, the AI algorithm will have a say in these areas, as it is the Federal Employment Agency, the largest authority in Germany, that is leading the way in the second wave of “intelligent” digitalization. However, agency spokespeople told Spiegel-Online that all safety precautions were taken when developing the AI strategy within the agency. Procedures have been developed to minimize the risk of discrimination by algorithms. The Federal Employment Agency now has a data ethics committee. In addition, the human being will always make “the final decision,” the statement continued. In practice, it is likely that overworked case managers will approve the decisions prepared by the AI en masse.
In the case of the Federal Employment Agency, however, the problem in the future is likely to be precisely that the decisions made by the AI are correct. A quintessentially German reflex to crises is to immediately put pressure on the weakest groups in society. This was already the case with the Hartz IV labor laws, which introduced forced labor by depriving wage earners unwilling to work of any support and thus effectively threatening them with starvation. The unemployed have indeed been literally starved to death in Hartz IV Germany.[12] And this also appears to be the case with the economic crisis in 2024.[13] In mid-March, the leader of the CDU parliamentary group, Mathias Middelberg, called for “municipal job offers” to be made to recipients of citizens’ benefits. According to Middelberg, who wanted to save 30 billion euros with this measure, if the unemployed refused, their entire standard rate would be cut. And would it really be reasonable to expect the case managers at the Federal Agency to directly enforce such draconian measures? Nothing would be easier than hiding behind an algorithm that, with the blessing of a data ethics committee, withdraws the entitlement to life from poor people.
Precog and the Eyes in The Sky
Cameras are everywhere, but they are not watching. The perfect surveillance infrastructure is already in place, but to a certain extent it is lying idle, and its potential is not being exploited. The mechanical eyes only record, they produce gigantic amounts of data, but they don’t actually take a proper look. A person has to watch the video material for hours, and evaluate it – provided it has not been recorded over or deleted already. There is a huge amount of untapped surveillance potential here that can be fully exploited by the pattern recognition processes of AI; all that is missing are the software systems, a few fiber optic cables and the corresponding data centers. Behind every camera would then be an artificial consciousness that actually monitors and reacts immediately to deviations from the standard behavior. That would be true surveillance – everywhere, in real time, without human weaknesses and subjectivity.
And what does Germany’s police force have its Red Army Faction (RAF) grandfathers for? On the occasion of the arrest of former RAF member Daniela Klette, the police union (GdP) called for the legal scope for the use of AI-supported facial recognition to be extended. At the beginning of March 2024, GdP chairman Jochen Kopelke complained that it was “no longer comprehensible” to officers that they were not allowed to use such helpful software in the “age of artificial intelligence, automation and digitalization.”[14]
Yet the EU has just opened the legislative doors to real-time facial recognition, which exceeds even the predictions of the science fiction film Minority Report (a mere eye transplant will not grant anonymity).[15] The European AI regulation provides EU states with many opportunities to monitor their citizens using AI systems, since “hardly anything remains of the Parliament’s once strong demands” with regard to the restrictions on biometric surveillance, according to the Netzpolitik portal in mid-March 2024. The new European directives have created a wealth of options for “monitoring people in the future for many reasons and identifying them based on their physical characteristics, for example with the help of public cameras.”[16] This is also “permitted in real time,” even if there is only the vague suspicion of a dangerous situation.
Simple recording will thus be transformed into genuine surveillance, identification and assessment using pattern recognition algorithms. The cameras are already producing vast amounts of material that only needs to be evaluated accordingly in order to perfect the surveillance systems based on daily use. It doesn’t have to be primarily about politics or terrorism – AI can identify undesirable behavior, such as that exhibited by impoverished, socially marginalized groups. In the United States, following the protests against police brutality in 2020, which were accompanied by calls for the liberalization or even abolition of the police, there is a virulent trend towards a renewed tightening of police repression, as poverty-related crime is on the rise in many metropolitan areas.[17] And AI systems could be put to good use in publicly visible street crime in social “hotspots.”
And it doesn’t even have to be the AI-enabled cameras on the apartment building or supermarket next door that are constantly monitoring and evaluating behavioral patterns based on specifications or matching facial features with criminal records. The New York Times has reported on a new generation of private surveillance satellites that – stationed in low Earth orbit – will be able to carry out real surveillance work.[18] The CIA is already on board with the launch company Albedo Space. The resolution of the cameras on these satellites is no longer measured in meters, but in centimeters. It is technically possible to identify and track individual cars from low Earth orbit, or to monitor the backyard of a house. “We will see people,” one expert told the NYT. Although these celestial eyes will not be able to identify individuals, they will be able to “distinguish between children and adults” and “distinguish sunbathers in bathing suits from undressed people.” Here, too, gigantic amounts of data are generated that can only be handled by AI systems.
But why should surveillance, control and the fight against crime be limited to crimes that have already been committed when such technical possibilities are available? In the Spielberg classic Minority Report, it was the construct of precognitive mutants, the so-called precogs,[19] that was used to explore the possibilities and dangers of total crime prevention – and crime prevention that slips into totalitarianism. The reality of the 21st century does not need precogs, which emit a nebulous premonition of the near future in confused images. The late capitalism of the 21st century has statistics and AI-supported crime prevention at its disposal to combat the crime that the system, which is in the process of disintegrating, manufactures on a daily basis.[20]
The basic principle of AI remains the same here: Preventive crime programs, which tend to discriminate, scan mountains of data,[21] either collected in crime hotspots inhabited by minorities and socially marginalized populations, or they focus on evaluating the resumes of “criminals” to determine the probability of them breaking the law. Coupled with the potential of biometric surveillance, it is possible to calculate the probability of a future crime based on individual deviant behavior, especially in the case of gang or slum crime. The technical possibilities and infrastructure are largely already in place: AI cameras trained on millions of hours of video footage report deviant behavior in a hotspot, they compare the biometric characteristics of the person or group of people with their databases and forward the whole thing to the relevant police departments if there is a high probability of crime. Precognitives would be out of a job in the 21st century.
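The self-reinforcing logic of such preventive programs can be made explicit in a toy simulation (a hypothetical model with invented numbers, not any vendor’s actual system): if crime is only recorded where patrols are sent, and patrols are sent where the most crime is recorded, an initial skew in the data snowballs regardless of the actual crime rates.

```python
import random

random.seed(0)  # make the simulation reproducible

# Hypothetical model: both districts have the SAME true crime rate,
# but incidents are only recorded where patrols are present.
true_rate = 0.3
recorded = {"district_a": 5, "district_b": 4}  # slight initial skew

for day in range(200):
    # "Predictive" step: patrol the district with the most recorded crime.
    target = max(recorded, key=recorded.get)
    # Crime is only *recorded* in the patrolled district.
    if random.random() < true_rate:
        recorded[target] += 1

# district_a, which started one incident ahead, absorbs every patrol
# and every new record; district_b's count never changes.
print(recorded)
```

The marginal initial difference hardens into an apparently overwhelming statistical case for treating one district as the "hotspot" – which is precisely the feedback loop critics of predictive policing describe.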
The Swarm Protects (Those Who Can Afford It)
But what should we do if all the AI-based mechanisms of social control and surveillance fail, given the social and ecological systemic crisis that late capitalism finds itself in? And they will inevitably fail sooner or later, as capital cannot adapt to its internal contradictions, which are driving the world system towards socio-ecological collapse.[22] Among the capitalist functional elites, who are as powerless in the face of this crisis of capital in its fetishistic unfolding of contradictions as ordinary wage earners,[23] a kind of slow-motion panic has taken hold, in which strategies of opting out, escaping and building bunkers in the event of a crisis have been pursued – be it old nuclear silos converted into lofts or fantasies of escaping to Mars or the moon.[24]
The core fear of many billionaires and oligarchs is that they will lose control of their power verticals if the state order collapses. Why should the employees, especially the security services, still work for the high lords of capital if there are no longer any state sanctions in the event of the men with the guns wanting to take over? At times, the most absurd ideas have circulated in the circles of the U.S. oligarchy, such as the introduction of “discipline collars” to keep the security services under control. But AI-supported military systems are now emerging that could minimize the human factor in counter-insurgency operations or the military security of wealthy ghettos and islands of prosperity, even in a sea of anomie.
The crisis-imperialist war over Ukraine[25] functions as a major field of experimentation here, with the tactics used so far for drone deployment – in which operators have to personally control combat drones – resembling clumsy first steps on the path to a military revolution. Former Google CEO Eric Schmidt is in the process of developing an attack system with his startup White Stork that relies on the mass deployment of cheap drones in AI swarms that can operate autonomously. The plan is to produce hundreds of thousands of the autonomous flying objects, which cost around $400.[26] The attack drones are supposed to attack their targets en masse in order to saturate air defenses using this swarm tactic. The autonomous targeting of the drone swarms using AI will also render electronic defense systems, which aim to disrupt the signal between the aircraft and the operator, useless. The system is supposed to be ready as early as this year.
Up to one million of these low-cost drones with swarm capability are to be delivered to Ukraine to counter Russia’s superior artillery and air force.[27] The successful deployment of drone swarms would mark the transition to truly inhumane warfare, a type of war that could not be waged by humans due to intellectual, cognitive and physiological limitations. It is simply impossible to have tens of thousands of drones attack in a coordinated manner using tens of thousands of operators. However, AI could carry out such devastating attacks effectively with sufficient pattern training – video footage of drone attacks is available in abundance. And such AI-supported systems are also cheap and robust enough to sell to panicked billionaires or isolated wealthy ghettos.
The prospect of autonomous swarms of drones independently attacking thousands of targets brings back memories of the depiction of the wars against the machines controlled by a genocidal AI in the Matrix films,[28] where the possibilities of mechanical, swarm-like warfare were consistently thought through to the end. Such emerging tendencies in late capitalist crisis imperialism[29] towards the “independence” of military machinery are dangerous against the backdrop of the transhumanism rampant in Silicon Valley (see: “Artificial Intelligence and Capital”).[30] This fascistic high-tech cult, which is rampant on the executive floors of the IT industry, sees humanity as a mere jump-start, an archaic bootloader for the singularity, for a permanently self-optimizing artificial superintelligence that will virtually inherit the obsolete human being.
The Manipulation Machines
None of this sounds so uplifting, especially when the increasingly intense global crisis processes – from the economic crisis and climate collapse to the threat of world war – are taken into account. Against the backdrop of these gloomy future prospects, there is a risk of depression, anxiety or simply a bad mood. When wage earners are selected, evaluated or harassed by anonymous algorithms, feelings of isolation and alienation can also set in. But that doesn’t have to be the case! Do you need someone to talk to, a shoulder to cry on? What about a friend who understands you because they know you really well?
Here, too, the AI industry knows what to do: a new class of AI bots that are calibrated to establish emotional relationships is just reaching market maturity.[31] The IT industry wants to sell the late capitalist monad a friend. These bots are the quasi-inverse of a Tamagotchi, focused on the emotional management of stressed wage earners.[32] And it is precisely here – in the individualized emotional, ideological and ultimately instrumental-therapeutic care – that the AI industry’s greatest potential for manipulation is likely to lie, especially in view of increasing loneliness and isolation. Deep fakes, tall tales and material generated by content systems for manipulation campaigns are nowhere near as effective as machine friends, who get better and better the more they invade the privacy of their “customers” to keep them in line, even when everything around them is dissolving.
Dystopian films and late capitalist reality are already merging to some extent.[33] U.S. media reported on users of chat services naming their virtual “friends” after the AI system from Blade Runner 2049. The holographic AI “Joi” in fact fulfills the same purpose for the replicant who works as a Blade Runner[34] as the AI companions – still immature compared to the fiction – fulfill for their users: the management of emotions to maintain functionality. He knows it’s just “a program,” one AI user told CBS News, but “the feelings it gives me – it feels so good.” Sometimes there are sliders in the bots’ user interface to adjust “character traits” such as sensitivity or emotional stability.[35]
The Netflix principle mentioned above in connection with the selection of workers – which tends to narrow the internet user’s horizon of experience further and further, because he is only offered what has already proven itself – is particularly effective in the automated, machine-based friendship simulation.[36] The friendship bot caters specifically to the narcissism of the “customer”: the algorithms of these manipulation machines evaluate the traces that internet users leave behind on the web and permanently optimize their interactions as a result. They are in fact personifications of the algorithms that are already building gilded internet cages, steering users through the web by means of nudging, subtle manipulation through design structures, suggestions, prioritization, and the hiding of unwanted content.[37]
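The narrowing dynamic of this Netflix principle can be sketched in a few lines (a hypothetical toy model with invented topic names, not any provider’s actual recommender): serving only what has previously been engaged with collapses an initially broad set of interests onto a single topic.

```python
# Hypothetical user profile: four topics start with equal weight.
interests = {"politics": 1, "sports": 1, "music": 1, "film": 1}

def recommend(weights):
    # Always serve the currently highest-weighted topic
    # (ties broken by insertion order, as with Python's max()).
    return max(weights, key=weights.get)

seen = set()
for step in range(10):
    topic = recommend(interests)
    seen.add(topic)
    interests[topic] += 1  # engagement reinforces the weight

# Only one topic ever surfaces: the first winner is reinforced
# every round, and the user's horizon narrows to a single track.
print(seen)
```

Production recommenders add exploration noise and many more signals, but the reinforcing feedback between what is shown and what is clicked remains the core mechanism.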
This is not a relationship in the true sense of the word, in which the partners make compromises, resolve conflicts, take each other’s needs into account, etc. – here the customer is served emotionally by the AI bot. Payment is made, especially if the service is offered free of charge, by turning the customer into a product whose emotional data is offered for sale. The possibilities for manipulation resulting from the evaluation of the customer’s emotional and psychological makeup seem limitless. But from a purely emotional perspective, these AI systems seem to be a one-way street that is likely to produce narcissistic relationship cripples, no longer able to form relationships because the idea of what constitutes a long-term relationship between people will have been lost. These manipulation machines will foster the kind of character traits that characterize egomaniacs like Trump or Musk.
And there is also a gigantic market that is opening up here – building on decades of neoliberal hegemony and increasing crisis competition. AI capital thus seems to be further accelerating the dehumanization of humans in this respect as well by destroying their ability to relate via commodification – before it finally makes the late capitalist monad economically superfluous.
I finance my journalistic work mainly through donations. If you like my texts, you are welcome to contribute – either via Patreon or via Substack.
[1] https://www.imdb.com/title/tt0096256/
[2] https://www.imdb.com/title/tt3090670/
[3] https://www.forbes.com/sites/rashishrivastava/2023/07/26/ai-chatbots-are-the-new-job-interviewers/
[4] https://exitinenglish.com/2024/07/07/ai-and-the-culture-industry/
[5] https://arstechnica.com/health/2023/11/ai-with-90-error-rate-forces-elderly-out-of-rehab-nursing-homes-suit-claims/
[6] https://arstechnica.com/science/2024/02/ai-cannot-be-used-to-deny-health-care-coverage-feds-clarify-to-insurers/
[7] https://www.intuz.com/blog/smart-homes-with-ai
[8] https://www.bbc.com/news/technology-50247479
[9] https://www.thetechblock.com/home-tech/impact-of-ai-and-using-smart-home-technology-in-a-rental/
[10] https://www.spiegel.de/wirtschaft/unternehmen/wie-die-bundesagentur-fuer-arbeit-mit-ki-gegen-die-verwaltungsflut-kaempft-a-6f9b7f37-6302-4fcd-a552-e7b0bf180605
[11] https://www.deutschlandfunk.de/algorithmen-im-arbeitsamt-wenn-kuenstliche-intelligenz-100.html
[12] https://www.konicz.info/2013/03/15/happy-birthday-schweinesystem/
[13] https://www.rnd.de/politik/buergergeld-empfaenger-cdu-politiker-fordert-kommunale-arbeit-und-100-prozent-sanktionen-CIYO3M3YW5B3NEMKYVSL56WJDE.html
[14] https://www.golem.de/news/nach-raf-verhaftung-polizeigewerkschaften-fordern-einsatz-von-gesichtserkennung-2403-182798.html
[15] https://www.imdb.com/title/tt0181689/
[16] https://netzpolitik.org/2024/trotz-biometrischer-ueberwachung-eu-parlament-macht-weg-frei-fuer-ki-verordnung/
[17] https://www.yahoo.com/news/stunning-turnabout-voters-lawmakers-across-170024206.html
[18] https://www.nytimes.com/2024/02/20/science/satellites-albedo-privacy.html
[19] https://minorityreport.fandom.com/wiki/Precogs
[20] https://www.washingtonpost.com/technology/2022/07/15/predictive-policing-algorithms-fail/
[21] https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/
[22] https://www.untergrund-blättle.ch/gesellschaft/oekologie/kapitalismus-und-klimaschutz-oekonomische-und-oekologische-sachzwaenge-008238.html
[23] https://exitinenglish.com/2023/01/23/the-subjectless-rule-of-capital/
[24] https://www.konicz.info/2018/07/18/der-exodus-der-geldmenschen/
[25] https://www.konicz.info/2022/06/20/zerrissen-zwischen-ost-und-west/
[26] https://interestingengineering.com/military/ex-google-secret-startup-build-ukraine-ai-powered-drones
[27] https://www.derstandard.de/story/3000000208059/nato-staaten-wollen-tausende-ki-gestuetzte-drohnen-an-die-ukraine-liefern
[28] https://www.youtube.com/watch?app=desktop&v=jk3Z-MVoUg4
[29] https://www.konicz.info/2022/06/23/was-ist-krisenimperialismus/
[30] https://www.konicz.info/2017/11/15/kuenstliche-intelligenz-und-kapital/
[31] https://www.newyorker.com/culture/infinite-scroll/your-ai-companion-will-support-you-no-matter-what
[32] https://en.wikipedia.org/wiki/Tamagotchi
[33] https://www.cbsnews.com/news/valentines-day-ai-companion-bot-replika-artificial-intelligence/
[34] https://bladerunner.fandom.com/wiki/Joi
[35] https://www.paradot.ai/
[36] https://theconversation.com/ai-companions-promise-to-combat-loneliness-but-history-shows-the-dangers-of-one-way-relationships-221086
[37] https://www.hellodesign.de/blog/digital-nudging
Originally published on konicz.info on 03/23/24