Today, algorithms are applied more and more widely, and an algorithm society is taking shape. While enjoying the convenience that algorithms bring, people also face the risk of becoming, in some respects, “prisoners” of algorithms. Human cognition, judgment, and decision-making may become subject to algorithms, and people’s social position may be locked in place by algorithmic bias, discrimination, and other factors.
On some digital labor platforms, algorithms implicitly control labor, and new technologies such as algorithms and big data may also intensify the surveillance of human beings. But reflecting on the “prisoner” risk of the algorithm society does not mean rejecting algorithms. Only by strengthening the technical rationality and algorithm ethics training of algorithm developers, and improving the algorithm literacy of algorithm users, can we better meet the challenges of the algorithm society.
This article is from the WeChat official account Global Media Journal (ID: GJMS2014). Author: Peng Lan (professor at the Journalism and Social Development Research Center and the School of Journalism, Renmin University of China). Title image: Visual China.
In recent years, journalism and communication scholars have paid increasing attention to algorithms, but most research has focused on the relationship between algorithms and information dissemination, especially the “information cocoon.” With the development of big data and artificial intelligence, however, algorithms have begun to enter and affect our lives comprehensively; from a certain perspective, an algorithm society is coming. Research on algorithms therefore requires a broader perspective.
In the view of algorithm experts, an algorithm is “a finite, definite, effective problem-solving method suitable for implementation as a computer program, and the foundation of computer science” (Sedgewick & Wayne, 2012). In layman’s terms, an algorithm can be viewed as a set of instructions or schemes, implemented by a computer program, that is based on data analysis and oriented toward a specific goal. The algorithms closely tied to people’s lives today are diverse, including personalized recommendation algorithms, decision-making algorithms, the algorithms of various platforms, governance algorithms serving various management objectives, and so on.
The widespread application of algorithms is inevitable, and we must also face the challenges and risks they bring. From an individual perspective, whether people, while enjoying the conveniences algorithms provide, will be restrained by them, or even become their “prisoners,” requires thought and vigilance.
1. Will people’s cognition, judgment and decision-making be subject to algorithms?
In essence, an algorithm is an intermediary: it builds a data-based interface between people and the real world according to a computational model serving a specific goal. On this basis, it influences people’s cognition, judgment, and decision-making.
1. The impact of recommendation algorithms on people’s cognition
Although personalized recommendation algorithms have drawn attention in recent years with the rise of algorithm-driven content platforms, personalized recommendation in fact entered the Internet long ago: search engines and e-commerce platforms adopted recommendation algorithms early on.
Currently, the main personalized recommendation algorithms include content-based recommendation, collaborative filtering recommendation, tag-based recommendation, social recommendation, recommendation based on deep learning, recommendation based on knowledge, recommendation based on network structure, hybrid recommendation, etc.
Content-based recommendation refers to recommending other objects with attributes similar to those of the object a user has selected (Xu Hailing et al., 2009); in personalized recommendation, this is a common mechanism. Collaborative filtering recommends resources based on the similarity of user interests: it finds users whose preferences resemble the current user’s and recommends the resources they favor, so that users can discover new resources of interest (Xing Chunxiao et al., 2007). The tag-based method analyzes the user’s tag preferences and the tag characteristics of items, and recommends items based on the similarity of the two; its essence is to form a user-tag-item ternary relationship by introducing tags (Kong Xinxin et al., 2017).
The social recommendation that has received attention in recent years mainly builds a social relationship network from the social information between users, and makes recommendations based on this network together with known user interest models (Meng Xiangwu et al., 2015). Recommendation algorithms based on deep learning may be “smarter” still, for example predicting a user’s next action by modeling the user’s historical sequence of behaviors, or mining the user’s background information for more comprehensive recommendations (Liu Junliang & Li Xiaoguang, 2020). More new ideas in recommendation algorithms can be expected in the future.
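The collaborative filtering idea described above can be sketched in a few lines of Python. This is a minimal illustration, not any platform’s actual system; the ratings data, item names, and function names are invented for the example.

```python
from math import sqrt

# Toy user-item ratings (1-5). Entirely hypothetical data for illustration.
ratings = {
    "alice": {"a": 5, "b": 3, "c": 4},
    "bob":   {"a": 4, "b": 3, "c": 5, "d": 4},
    "carol": {"b": 1, "d": 5, "e": 4},
}

def cosine_sim(u, v):
    """Cosine similarity over the items two users have both rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = sqrt(sum(u[i] ** 2 for i in common))
    nv = sqrt(sum(v[i] ** 2 for i in common))
    return dot / (nu * nv)

def recommend(target, ratings, top_n=2):
    """Score items the target has not seen by similarity-weighted
    ratings from other users, and return the top-scoring items."""
    scores, weights = {}, {}
    for other, other_ratings in ratings.items():
        if other == target:
            continue
        sim = cosine_sim(ratings[target], other_ratings)
        if sim <= 0:
            continue
        for item, r in other_ratings.items():
            if item in ratings[target]:
                continue  # only recommend unseen items
            scores[item] = scores.get(item, 0.0) + sim * r
            weights[item] = weights.get(item, 0.0) + sim
    ranked = sorted(((s / weights[i], i) for i, s in scores.items()), reverse=True)
    return [item for _, item in ranked[:top_n]]

print(recommend("alice", ratings))  # → ['d', 'e']
```

The sketch shows the core logic: users similar to “alice” have liked items she has not yet seen, so those items surface first. Production systems add many refinements (rating normalization, neighborhood size limits, implicit feedback), but the principle is the same.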
Recommendation algorithms have become so widely used on the Internet because their core motivation is to solve the problem of matching supply and demand between massive volumes of information or products and their users: for users, finding the information or products that meet their needs; for producers, finding suitable users for their content or products.
From the perspective of content or product recommendation, the algorithm as intermediary essentially provides users with a filter. While reducing the user’s cognitive burden, this filtering may also limit the user’s field of vision.
The American scholar Eli Pariser long ago drew attention to the “filter bubble” problem brought about by the personalized recommendations of search engines. He pointed out that personalized filters break our cognitive balance between reinforcing existing ideas and acquiring new ones in two ways:
First, they surround us with ideas we are already familiar with (and already endorse), making us overconfident in our existing frames of thinking;
Second, they remove from our environment some of the key cues that stimulate our desire to learn (Pariser, 2020, p. 65). He also argues that personalized recommendations narrow our “solution horizon,” that is, the size of the space in which we search for solutions to problems, and thereby limit creativity (Pariser, 2020, p. 72).
Domestic researchers have mostly explored, from the perspective of the information cocoon, whether algorithms narrow people’s vision and solidify their attitudes and positions. Their judgments are not consistent: although many researchers worry that algorithms create information cocoons, some argue that the information cocoon is not a scientific concept and that an exact relationship between algorithms and information cocoons has not yet been demonstrated.
Whether we speak of filter bubbles or information cocoons, and whether or not the relationship between algorithms and information cocoons is confirmed, we can at least see that, by its very principle, the algorithm does perform filtering, which inevitably affects people’s perception of the external environment to some extent.
In today’s information explosion, information filtering is an inevitable choice; even traditional media filter information and construct a pseudo-environment in the media. If the media reasonably assess the public value of content, and if they adhere to the positions of “objectivity” and “balance,” then the media’s pseudo-environment can still help people understand the real society adequately. When professional media filter information, the key consideration is not what people want to know, but what people ought to know, judged from the perspective of communicating and perceiving the social environment; media selection usually also takes the balance and diversity of content into account.
In current algorithm design, however, content recommendation algorithms are based mainly on people’s habits and the interests of similar people, that is, on what people want: in a sense they cater to the inert side of human cognitive psychology, and this catering may even reinforce people’s selective psychology. From the standpoint of information-acquisition efficiency, such algorithms help people obtain, at lower cost, information more consistent with their own preferences and needs; but whether recommendation algorithms should merely comply with people’s inertia and wishes is a question worth pondering.
Important goals of information dissemination are social integration, connecting different groups of people, and promoting public dialogue, all of which require breaking out of individual cocoons. Content recommendation algorithms therefore need to balance the dual goals of personalized satisfaction and public integration, and this should be a direction of future effort.
Beyond their impact on individuals, content recommendation algorithms also create an overall pseudo-environment on their platforms, and whether this pseudo-environment can fully and truly reflect the real society likewise depends on the design of the algorithm. Today’s platform algorithms, however, rely too heavily on traffic-oriented thinking, which is likely to produce a Matthew effect, aggravate the imbalance of the information environment, and widen the gap between the pseudo-environment and the real one.
The technical community’s evaluation indicators for recommendation algorithms mainly include accuracy (user satisfaction with recommended content), ranking rationality, coverage, diversity (both inter-user and intra-user), novelty, and so on (Zhu Yuxiao & Lu Linyuan, 2012). The indicators proposed so far are aimed mostly at product recommendation systems on e-commerce platforms, which is why accuracy comes first; this accuracy mainly measures consistency with the user’s habits and needs.
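Two of the indicators above, coverage and intra-user (intra-list) diversity, are simple enough to compute directly. The sketch below is a simplified illustration with an invented catalog, topic labels, and recommendation lists; real evaluation pipelines use richer item-similarity measures than topic equality.

```python
from itertools import combinations

def coverage(recommendations, catalog):
    """Fraction of the catalog that appears in at least one user's list."""
    recommended = set()
    for items in recommendations.values():
        recommended.update(items)
    return len(recommended & set(catalog)) / len(catalog)

def intra_list_diversity(items, item_topics):
    """Average pairwise dissimilarity inside one user's list
    (1.0 means every pair of recommended items differs in topic)."""
    pairs = list(combinations(items, 2))
    if not pairs:
        return 0.0
    different = sum(1 for a, b in pairs if item_topics[a] != item_topics[b])
    return different / len(pairs)

# Invented example data.
catalog = ["a", "b", "c", "d", "e", "f"]
item_topics = {"a": "sports", "b": "sports", "c": "politics",
               "d": "tech", "e": "tech", "f": "arts"}
recs = {"u1": ["a", "b", "c"], "u2": ["a", "d"]}

print(coverage(recs, catalog))                        # 4 of 6 items ever shown
print(intra_list_diversity(recs["u1"], item_topics))  # 2 of 3 pairs differ
```

Measured this way, a system can score high on accuracy while coverage and diversity stay low, which is exactly the imbalance the text describes.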
However, the evaluation of content recommendation algorithms still lacks corresponding standards. Although researchers in the field of humanities and social sciences call for “professional values” to be reflected in the design of content recommendation algorithms, it is still a big challenge to embed these values in the algorithm.
Perhaps diversity needs to be put in a more important position in the measurement standards of content recommendation algorithms. This diversity is not only the diversity of content themes, but also the diversity of attitudes, the diversity of content producers, and so on. This is also a concrete manifestation of the objective and balanced professional values of journalism.
On the one hand, content recommendation algorithms need to broaden content consumers’ horizons beyond the perspective of personalized recommendation. In other words, content recommendation should serve not only content consumers but also content producers: algorithm design should allow more of the high-quality content that producers create to be disseminated, and in particular should enable content of important public value to reach a wider audience.
On the other hand, from the users’ perspective, even if algorithms can one day better balance diversity, personalization, and public content in what they push, people who leave the right to choose information entirely to the algorithm, simply waiting each day for the information it feeds them, will steadily lose their autonomy and judgment.
Beyond recommendation algorithms, the influence of social bots on human cognition is another manifestation of algorithmic influence. Social bots are clusters of social media accounts set up by human controllers and run by automated algorithmic programs (Zheng Chenyu & Fan Hong, 2020).
On many social platforms, social bots automatically produce all kinds of content under algorithmic control. This content is mixed in with human-produced content and often cannot be identified by ordinary users. Through social bots, therefore, the information environment of social platforms is easily manipulated, and this manipulated environment in turn affects users.
Algorithms are also present in today’s intelligent content creation, whether of text reports, videos, or other works. These works resemble human creations in appearance, sometimes indistinguishably so, yet they are only a cognitive interface constructed by algorithms.
Although algorithms simulate human creative thinking, and may in some ways break human routines and lead people into cognitive territory never explored before, they have their own limitations: they can reflect the real world only along certain dimensions, and their reflection of the world remains relatively “flat.” If we always understand the world through algorithm-built interfaces, our ways of understanding it will become more and more monotonous, and we will lose a complete grasp of the world.
2. Algorithms inhibit human judgment and decision-making ability
Many times today, we also rely on algorithms to make judgments and decisions, and even make big decisions.
For an individual, when he accepts the content and product recommended by the algorithm, he is also making judgments and decisions with the aid of the algorithm in a sense, that is, the value judgment of the content and the product is based on the evaluation of the algorithm.
Using navigation software means relying on algorithms to judge and select routes, and future driverless cars will depend on algorithmic judgment. In these respects, algorithms can indeed help people make correct, even better, decisions.
With the support of big data and artificial intelligence technology, algorithms are becoming more and more significant in the decision-making of organizations or some industries. For example, banks can refer to the results of data and algorithmic analysis when conducting credit and risk assessments on loans issued by enterprises or individuals. Companies may use algorithms to make judgments when hiring employees.
In the medical field, intelligent image analysis and disease diagnosis systems are helping doctors make diagnosis and treatment decisions. In the legal system, artificial intelligence is also attempting to participate directly in decision-making or to make partial judgments, for example in recidivism risk assessment, judging a suspect’s flight risk, and calculating reasonable sentences (Li Zheng, 2020). Urban traffic and other aspects of city management will also rely increasingly on algorithms.
The influence of algorithms on production decisions is also emerging, and the content production industry is a typical example. The use of various rankings to determine the topic selection direction of content production or the planning of content products is a manifestation of today’s media relying on algorithms for content production, because various rankings are also formed by certain algorithms. In addition to rankings, the media can also use more complex data analysis and decision-making algorithms.
For example, Bilibili’s (Station B’s) 2020 New Year’s Eve gala not only attracted the platform’s own users but also succeeded in “breaking the circle,” a success related to the use of data and algorithms in selecting guests and performances. The core idea of the emerging field of computational advertising is to base the entire advertising process, including creative work, format selection, target group selection, distribution, and interaction, on data and algorithms.
The influence of algorithms on other economic activities and business decisions is also deepening. On sharing economy platforms such as ride-hailing platforms, for example, the dynamic, changeable, and complex networked connections and peer-to-peer transactions between producers and consumers rely on powerful algorithms designed, maintained, and operated by platform companies (Yao Qian, 2018). In corporate product development, algorithms can be used for planning, assessment, product pricing, evaluation of operational effects, and real-time control of operations; here too algorithms have their own unique functions.
Algorithms matter in certain aspects of decision-making for two reasons: first, they can analyze the history, current state, and even future trends of the objects relevant to a decision (including through big data analysis); second, they can establish a decision model and, on that basis, analyze the various possibilities and seek the best solution. Algorithms use information filtering and model construction as means of reducing cognitive burden and improving cognitive efficiency (Jiang Ge, 2019). They may therefore hold advantages in decision speed and efficiency, and even in the accuracy of certain decisions.
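The second point, building a decision model and seeking the best option, can be illustrated with a minimal expected-value sketch. The actions, probabilities, and payoffs below are invented purely for illustration; real decision systems use far richer models.

```python
# Minimal decision-model sketch: score each candidate action by its
# expected payoff over possible outcomes, then pick the best action.

def expected_value(outcomes):
    """Expected payoff of one action: sum of probability * payoff."""
    return sum(p * payoff for p, payoff in outcomes)

def best_action(actions):
    """Return the action whose expected payoff is highest."""
    return max(actions, key=lambda name: expected_value(actions[name]))

# action -> list of (probability, payoff); all numbers are made up.
actions = {
    "approve_loan": [(0.9, 100), (0.1, -500)],  # mostly repaid, rare default
    "reject_loan":  [(1.0, 0)],                 # no risk, no gain
}
print(best_action(actions))  # expected values: 40 vs 0
```

The sketch also hints at the limits the following paragraphs discuss: everything outside the model, such as fairness or the human cost of a wrong rejection, simply does not enter the calculation.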
Does this mean that people should leave all judgments and decisions to algorithms?
“The reason people need algorithms to solve problems is that they need cognitive models to keep their cognitive burden within the scope of their purpose” (Jiang Ge, 2019). Precisely because an algorithm is a cognitive model, it is an abstraction and simplification of the real world; in many cases it reflects only typical objects rather than things in their entirety. The diverse world cannot be fully described or computed in data. Algorithms may describe and explain real society well at some levels but be powerless at others; relying entirely on them will sometimes produce erroneous judgments and decisions.
People’s innovation today is also a process of judgment and decision-making. Although algorithms have broken some old human routines, they also form new ones; if people’s decisions become ever more trapped in algorithm-created routines, human imagination and creativity will shrink.
Algorithmic decision-making relies mainly on judgments of fact, but the decision-making process often requires other judgments involving emotion, morality, and ethics. Some legal researchers argue: “Judicial adjudication is not a procedural rational calculation, but a unity of fact and value (substantive) and of technical application and democratic process (procedural); it needs, through fact-finding and the application of law, to convey the warmth of humanity and demonstrate humanistic care” (Chen Minguang, 2020). This view applies to other areas of decision-making as well.
The ethical issues in algorithmic decision-making are even more troublesome today and are an important concern for the future development of artificial intelligence. Some researchers note that artificial intelligence may encompass ethical-impact agents, implicit ethical agents, explicit ethical agents, and full ethical agents, and that artificial intelligence presents a certain quasi-agency in its interactions with people (Duan Weiwen, 2017).
Other scholars, however, argue that artificial intelligence and robots cannot handle practical ethical issues in open contexts (Lan Jiang, 2018). Whether algorithms can better resolve various ethical dilemmas in the future remains unknown; but even if we place hope in their capacity for ethical judgment, we cannot hand ethical judgment over entirely to machines.
A more basic question is how we judge the reliability of an algorithm. The quality of the data and the reasonableness of the algorithmic model both affect the quality of an algorithm’s results. Although data-based algorithms appear objective, they actually conceal many subjective factors, which can also undermine their reliability.
Therefore, the challenge that the algorithm society brings to people’s decision-making and judgment is twofold:
On the one hand, we must avoid handing all decisions and judgments over to algorithms; we must judge in which areas algorithms can help us make better decisions, and in which areas they may lead us astray;
On the other hand, even when we do need to consult algorithms, we must be able to judge whether the algorithm itself is flawed, whether the data it rests on are reliable, whether it is biased, and whether the results it provides are reasonable and accurate. Without this kind of judgment, blind reliance on algorithms will inevitably lead into various traps.
2. Will people’s social position be restricted by algorithms?
Beyond cognition, judgment, and decision-making, people’s social position may also be affected by algorithms.
This is firstly related to algorithmic bias and discrimination. Algorithm bias and discrimination may originate from the design of the algorithm itself, or from the data on which the algorithm is based.
Artificial intelligence is a reflection of human thinking. The “categorizing” cognitive attitude that humans adopt toward certain problems is also reflected in the algorithmic processes of artificial intelligence (Bu Su, 2019), and hidden within this categorization tendency are the prejudices embedded in human thinking and culture. Many algorithm designers may be unaware of the prejudice or discrimination in their algorithms; they simply design them by following social culture or their own habits of thought, and in doing so carry forward biases that were previously hidden.
As one researcher put it: “Human culture contains prejudices, and big data, having the same structure as human society, inevitably contains deep-rooted prejudices as well. Big data algorithms merely summarize this discriminatory culture” (Zhang Yuhong et al., 2017).
At the data level, “the data bias associated with algorithmic bias is an inevitable product of historical data itself. When AI systems learn and train on such biased data, their analytical results naturally carry a clear bias. Algorithmic programs trained on historical data may perpetuate or even aggravate discrimination based on race or gender” (Yang Qingfeng, 2019). In data mining, different groups of people are often classified, and this classification inherits, even amplifies, systems of inequality (Lin Xi & Guo Sujian, 2020).
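A toy sketch can make this mechanism concrete: a “model” that simply learns historical approval rates per group from biased records will reproduce that bias on new applicants. The groups, records, and threshold below are entirely synthetic and only illustrate the feedback loop the quoted researchers describe.

```python
# Synthetic historical decisions: (group, approved). group_x was
# historically favored, group_y disadvantaged. All data is invented.
historical = [
    ("group_x", True), ("group_x", True), ("group_x", True), ("group_x", False),
    ("group_y", True), ("group_y", False), ("group_y", False), ("group_y", False),
]

def learn_rates(records):
    """Estimate each group's historical approval rate."""
    totals, approvals = {}, {}
    for group, approved in records:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def predict(group, rates, threshold=0.5):
    """Approve whenever the group's historical rate clears the threshold."""
    return rates.get(group, 0.0) >= threshold

rates = learn_rates(historical)
print(rates)                      # group_x favored at 0.75, group_y at 0.25
print(predict("group_x", rates))  # past advantage carries forward
print(predict("group_y", rates))  # past disadvantage is locked in
```

Nothing in the code is malicious; the bias enters entirely through the training records, which is exactly why “objective” data-driven systems can perpetuate historical discrimination.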
Although recommendation algorithms can exhibit bias and discrimination, their impact falls mainly on the cognitive level; in decision-making algorithms, by contrast, bias and discrimination may affect people’s rights, social position, and mobility.
The algorithmic discrimination now receiving the most attention, such as discrimination in employment, credit, and investment, is largely bound up with people’s social position and mobility. “In algorithmic decision-making, the individual is given a new identity, the ‘algorithmic identity.’ Once an individual’s algorithmic identity is tagged with a label prone to discrimination, it produces a double cumulative disadvantage” (Liu Pei & Chi Zhongjun, 2019).
People are often marked by decision-making algorithms according to their existing identity and social position. Those in dominant positions tend to receive favorable marks and thus gain greater possibilities of obtaining resources and moving upward, while those already disadvantaged are placed by those marks in an even more disadvantaged position, losing corresponding opportunities for employment, investment, and more.
Algorithms can track people’s historical records, correlate data across platforms, and lay people’s backgrounds bare, making it still harder to escape the shackles of one’s existing social position. “When algorithms connect an individual’s virtual and real worlds, and connect an individual’s past, present, and future, a one-time injustice may evolve into structural ‘discrimination lock-in’ for that individual” (Zhang Xin, 2019).
If there is not enough regulation, algorithmic discrimination may also be reflected in other fields such as education and medical care in the future.
In a certain sense, algorithmic bias and discrimination cannot be avoided entirely. For algorithm designers, certain mechanisms, including legal constraints, are needed to minimize their occurrence; ordinary people, for their part, need to be aware that algorithmic bias and discrimination exist and understand how they affect the individual.
Beyond bias and discrimination, algorithms can confine people’s social position in other ways. The cognitive limits imposed by recommendation algorithms, discussed above, also restrict people’s understanding of other groups to some extent: while connecting similar people, algorithms may deepen the separation between different groups. In this sense, algorithms easily confine people within particular “circles” and “strata.”
Another social differentiation brought by the algorithm society is that between the information-technology rich and poor. “The algorithm society is bound to be a society of technological elites: a few will become masters, while most can only obey” (Yu Xingzhong, 2018). In a sense, algorithms will exacerbate the digital divide: the information-technology poor not only have no share in algorithmic power, but are also trapped in their social class under the algorithmic power of others.
3. Will human labor be implicitly controlled by algorithms?
In September 2020, a feature article titled “Takeaway Riders, Stuck in the System” attracted wide attention. It reported that the takeaway platforms run an algorithm: from the moment a customer places an order, the system decides which rider to dispatch based on riders’ routes, locations, and directions of travel.
Orders are usually dispatched in batches of three or five, and each order has two task points, pickup and delivery. If a rider carries 5 orders, that is, 10 task points, the system faces roughly 110,000 possible route plans, from which it completes a “second-level solution of ten thousand orders for ten thousand riders” and plans the optimal delivery route (Lai Youxuan, 2020). For customers, this algorithm means getting their takeaway in the shortest possible time; for riders, it can mean ever-increasing time pressure and labor intensity.
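The basic dispatch logic can be illustrated with a deliberately simplified sketch: assign each incoming order to the currently closest rider. The real platform algorithms are proprietary and far more complex (batched orders, predicted travel times, route direction); the grid coordinates and helper functions here are invented purely for illustration.

```python
# Toy dispatch sketch: greedily assign each order to the nearest rider.

def distance(p, q):
    """Manhattan distance on a simple city grid."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def dispatch(orders, riders):
    """Assign each order to the currently closest rider, then move
    that rider's position to the order's delivery point."""
    positions = dict(riders)  # rider -> current location
    assignment = {}
    for order_id, pickup, dropoff in orders:
        rider = min(positions, key=lambda r: distance(positions[r], pickup))
        assignment[order_id] = rider
        positions[rider] = dropoff  # rider ends up where the order was delivered
    return assignment

riders = {"r1": (0, 0), "r2": (5, 5)}
orders = [
    ("o1", (1, 0), (2, 2)),  # pickup near r1
    ("o2", (5, 4), (6, 6)),  # pickup near r2
    ("o3", (2, 3), (0, 5)),  # r1, now at (2, 2), is closer than r2 at (6, 6)
]
print(dispatch(orders, riders))  # → {'o1': 'r1', 'o2': 'r2', 'o3': 'r1'}
```

Even this toy version shows why the system keeps riders perpetually loaded: whichever rider finishes closest to the next pickup is immediately assigned again, with no slack built into the objective.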
Some researchers note that the continual shortening of delivery times is inseparable from the algorithm’s disciplining of delivery workers. The digital platform mediates the relationship between labor and consumption through algorithms, winning over the capital market by constructing a discourse of efficient, punctual time, while imposing time discipline and time control on delivery staff under algorithmic management. Relying on the refined management of algorithms, the food delivery platform rationalizes and standardizes emotional labor that was informal in traditional contexts, further disciplining and domesticating delivery workers (Sun Ping, 2019).
Other researchers reach similar judgments. Studies have found that, compared with employers’ traditional simple control, hierarchical control, technical control, and bureaucratic control, the labor of takeaway riders is controlled by the platform: the platform system renders the labor process calculable with precision, achieving a high degree of control over, and precise prediction of, labor. This owes largely to the data, algorithms, and models behind the platform system (Chen Long, 2020).
Some platforms have even used algorithm upgrades to replace the original arrangement, in which riders decided the content and amount of their work, with forced algorithmic dispatch. Meanwhile, rejecting the algorithm’s forced dispatch carries a hidden but more serious price: the total number of orders a rider can receive drops significantly (Ye Weiming & Ouyang Rongxin, 2020).
Content production on digital platforms is a more direct digital labor, and this digital labor is also largely affected by platform rules. Platform rules will regulate the relationship between supply and demand and stimulate labor, and platform rules often require algorithms.
Some researchers who have studied the production mechanisms of online literature point out that the “paid reading model” has become the common profit rule of literature websites. To win readers’ attention and earn more in return, online writers write at ever greater length; writing is no longer a free, personalized self-presentation. Catering to the needs of readers and the market, sustaining readers’ reading pleasure, and triggering the desire to read have become the internal logic of online literary production. The “artistic youth” has gradually become a “pieceworker” with no fixed labor contract, paid by the wordage of individual works (Jiang Shuyuan & Huang Bin, 2020).
Similarly, on platforms such as short video and live video, algorithms not only incentivize users to participate in content production, but also invisibly control their labor pay and return, and even alienate their labor goals.
An important reason platform algorithms can directly control people’s labor is that the platform connects workers and consumers directly: consumers can give feedback on and evaluate workers’ labor through the platform, and workers’ output can be directly quantified, with quantified results becoming the main indicator for evaluating labor, which is precisely what algorithms are best suited to. Controlling labor by quantifying its results makes the labor process appear free, but to achieve better results workers put in more labor, including emotional labor, pleasing consumers in various ways to win extra affirmation.
For content producers, traffic becomes the most basic evaluation indicator. Some researchers argue that the pursuit of traffic in digital space has brought about communicative capitalism, in which another value of information, its contribution to traffic, has become more important: what matters is not content but traffic, and one stream of traffic differs from another only in its computed size (Lan Jiang, 2019).
Reducing the evaluation of content’s informational value and quality to an evaluation of traffic is an important change in communication in the new media era. The platform’s traffic statistics, displayed directly in the communication interface, together with the rankings and index systems built on such data, have created an algorithmic system of communication evaluation. This not only pressures workers but may also shift the starting point of their content production: they no longer make independent judgments as content producers, but must constantly accommodate the psychology and needs of content consumers, and in particular must consider the content needs of social networks, in which users act as “matchmakers” of information.
Although consumer-oriented considerations can, to some extent, make content producers’ work more purposeful and more responsive to market demands, an overly singular evaluation algorithm also lets a simple “voting with the mouse” mechanism replace professional evaluation mechanisms, and the professional judgment of content producers may be weakened as a result.
When influence or the volume of public opinion is measured by traffic, a kind of alternative digital laborer emerges: the “internet water army.” The water army, which uses fragmented time for paid digital labor, mainly works to artificially manufacture various data (such as inflating ratings, posting fake comments, and faking orders) in order to influence the relevant evaluations; in a sense, the water army is a “social symptom” of quantitative system design (Wu Dingming, 2019). The labor compensation of digital workers like the water army is closely tied to algorithms, and by altering the data in the evaluation systems of others, they affect others’ labor.
The platform may also change labor itself, turning some of it into what Schultz calls “play labor” (play-labor, playbor), achieving a unity of production and leisure (Li Xian, 2020) and enabling people to participate in digital labor more willingly. In Christian Fuchs’s view, all users of today’s digital platforms are in fact transport workers: ideological workers who transport goods for large platforms and advertisers for free. They are all digital laborers (Lanjiang, 2019). Although the “laborization” of new media users is driven by multiple forces, the role of algorithms in it cannot be ignored.
The platform’s various mechanisms are embodied in data and algorithmic models, and the platform’s control ultimately evolves into workers’ self-restraint and self-motivation. Under this self-restraint and encouragement, some workers become “perpetual motion machines.”
In many fields of labor, even without a platform, people must pressure themselves in the face of various quantitative assessment indicators. But people’s efforts raise the tide: assessment indicators escalate, and people’s pressure, far from being reduced, is further intensified. This is also a typical manifestation of the “involution” phenomenon that attracted so much attention in 2020.
Although the term “involution” has only become popular in recent years, the phenomenon has long existed. Using a variety of simple, superficial evaluation indicators to measure workers’ results (including those of mental workers) is similar to traffic statistics and can likewise be seen as a simple evaluation algorithm; in the algorithm era, such evaluative thinking seems to be intensifying.
Of course, some platforms also try, through algorithmic adjustments, to narrow the “gap between rich and poor” in traffic, so that more workers’ labor can receive attention and reward. Although this does not fundamentally liberate labor, it shows that the algorithm itself holds many possibilities. Algorithms themselves are neutral: they may confine workers, but applied properly they can also help loosen constraints and lighten workers’ burdens. It is just that today we have not yet thought or acted fully in this direction.
IV. Will the algorithmic society’s monitoring of people be strengthened?
With the development of artificial intelligence applications, AI provides a data basis for state power’s supervision of individuals and society, and the “algorithmization of governance systems” has begun to emerge (Wang Xiaofang & Wang Lei, 2019). A smart society, built with Internet, Internet of Things, and artificial intelligence technologies and mediated by “smart managers,” is also arriving (He Mingsheng, 2020). Algorithms play a central role in such intelligent management.
From a positive perspective, as scholars have pointed out, in social management artificial intelligence plays an important role in guiding rigid values (rule compliance under comprehensive technical monitoring), in establishing and maintaining social systems, and in formulating and maintaining the order of the life-world (Ren Jiantao, 2020).
Intelligent social governance can use big data, artificial intelligence, and other technologies to map the complex system of social operations into a multidimensional, dynamic data system, continuously accumulating the data characteristics of social operations in order to deal with various social risks and improve the effectiveness of social governance (Meng Tianguang & Zhao Juan, 2018). The algorithmization of social governance rules helps improve the initiative and foresight of social governance, letting governance actors more proactively predict, warn of, and prevent social risks (Zhou Hanhua & Liu Canhua, 2020). The opacity of algorithms is not necessarily a bad thing: used properly, opaque algorithms may even repair social rifts and maintain social consensus (Ding Xiaodong, 2017).
However, as artificial intelligence technologies, algorithms included, enter social governance, the risk that individuals will be subjected to various forms of surveillance and control is also increasing.
One of the foundations of algorithmic management is the digitization of people. The collection of human data today involves not only data that people actively produce or provide on new media platforms but also a great deal of data that people provide passively. Data collection tools have expanded to include various sensors and wearable devices, and their collection of human data has gone deeper: behavioral data, physiological data, and more have become objects of collection, much of it data that individuals are unwilling to disclose, or that even touches on personal privacy.
On the surface, the extensive collection of human data seems to have brought new conveniences to life; for example, facial recognition speeds up payment and security checks. But these conveniences often conceal enormous risks, of which individuals may be unaware; and even when they are aware, in many cases they cannot contend with the data-collecting agencies. Compelling people to undergo various forms of digitization, exchanging personal data for various service conveniences or rights, has become a common fact of the algorithm society.
On the basis of data collection, the algorithm can further compute the individual, thereby uncovering the deeper secrets behind the data and exercising control accordingly. Algorithmic control of people runs through the entire process: every activity and behavior may become input to the current algorithm and will also accumulate to shape the algorithm’s future results. “A behavioral activity that seems inconspicuous on the surface has in fact been calculated countless times by an invisible algorithmic program in the background. Any choice is a choice that conforms to the algorithm. In the end, our seemingly autonomous behavior all takes place within algorithmic governance.” (Lanjiang, 2020)
Algorithms embody various social rules, and the scoring mechanisms associated with algorithms quantify the results of people’s execution of those rules; algorithms and scoring therefore often go hand in hand. “This scoring mechanism can help gather the daily activities of social subjects, concentrate diffuse social awareness and value judgments to a certain extent, form a common will, and enforce it.” (Hu Ling, 2019)
In a certain sense, the scoring system strengthens people’s understanding of and compliance with social rules and stimulates self-discipline. “The credit-scoring decision system based on the data-algorithm-consequence model replaces the analysis framework of the law-behavior-consequence model. The generation of social credit derives from data extraction and is a one-way process of self-accountability, not one based on a two-way legal relationship” (Yu Qingsong, 2020). Rewards and punishments based on scoring are simple, direct, and sometimes effective.
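The data-algorithm-consequence chain quoted above can be sketched in a few lines. Everything here, including the rule list, point values, and thresholds, is invented purely for illustration and does not describe any real credit system; the sketch only shows how rewards and punishments can follow mechanically from a score extracted from behavior records, with no two-way legal relationship in the loop.

```python
# Illustrative sketch of the "data -> algorithm -> consequence" chain:
# recorded events are extracted into a score, and treatment follows
# directly from the score. Rules and thresholds are hypothetical.

RULE_POINTS = {"on_time_payment": 10, "late_payment": -20, "jaywalking": -5}

def credit_score(events: list[str], base: int = 100) -> int:
    """Data extraction step: fold recorded events into a single score."""
    return base + sum(RULE_POINTS.get(e, 0) for e in events)

def consequence(score: int) -> str:
    """Consequence step: reward or punish directly from the score."""
    if score >= 110:
        return "fast-track services"
    if score >= 80:
        return "normal treatment"
    return "restricted services"

events = ["on_time_payment", "late_payment", "jaywalking"]
print(consequence(credit_score(events)))  # -> "normal treatment"
```

Note that the individual appears only as a stream of events; there is no step at which they can contest how an event was recorded or weighted, which is the one-way self-accountability the quoted passage describes.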
The scoring mechanism is valuable for social risk control at certain times, but it is undeniable that it may infringe on personal privacy and that the power of algorithmic control may be abused.
As researchers have pointed out, the power of scoring may compress the space in which moral and social norms operate, and the exercise of power will become deeper and more dynamic (Hu Ling, 2019). Social credit supervisors can enter private territory beyond the reach of traditional power, bringing actors’ private space (including mental state, behavioral ethics, etc.) under the supervision of the social credit system (Yu Qingsong, 2020). The reason the “civilization code” proposed by a local government in 2020 was questioned is precisely that it attempted to abuse managerial power and, through algorithms, intrude into private spheres such as moral evaluation.
The scoring system under algorithms not only supports monitoring by management agencies but also provides a basis for mutual evaluation and supervision among people. The development of algorithmic technology can make everyone an observer, enforcer, and judge of others’ digital personality (Yu Qingsong, 2020). In some cases this kind of scoring can provide a basis for evaluating safety in cyberspace interactions (especially transactions) and help people make risk judgments; but at the same time, the power of mutual scoring may be abused, and mutual monitoring among users will put individuals under greater pressure.
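The abuse potential of mutual scoring can be made concrete with a minimal sketch. The numbers below are invented: a reputation computed as a plain mean of peer ratings, where a small coordinated group of hostile raters measurably drags a reputation down, which is one simple form the abuse described above can take.

```python
# Minimal sketch of user-to-user mutual scoring: each user's reputation
# is the mean of ratings left by others, so a few coordinated hostile
# ratings shift it noticeably. Purely illustrative data.

def reputation(ratings: list[int]) -> float:
    return sum(ratings) / len(ratings)

honest = [5, 5, 4, 5, 4]            # ratings from ordinary counterparties
brigaded = honest + [1, 1, 1]       # three coordinated hostile ratings

print(round(reputation(honest), 2))    # 4.6
print(round(reputation(brigaded), 2))  # 3.25
```

Real platforms use more robust aggregation than a plain mean, but the underlying tension remains: the same mechanism that informs risk judgments also hands every user a small lever of power over every other.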
Algorithmic governance is not alone in requiring personal data; companies, too, are collecting personal data as the foundation of market analysis and operations. For companies, algorithms are tools for maximizing profit; for algorithms, personal information and market data are fuel (Ezrachi & Stucke, 2018).
At the same time, “algorithms and commercial capital combine to form surveillance capitalism; users are embedded in the data production chain and become objects controlled by algorithms” (Zhang Linghan, 2019). Companies’ control of users is manifested not only in the use and control of users’ personal information and data but also in the control of their needs and behaviors: algorithms constantly tap users’ latent needs, and even induce needs, fueling consumerism.
While external forces strengthen their control over individuals through algorithms, individuals themselves, guided by data and algorithms, may also intensify self-censorship in self-presentation and social interaction; algorithmic monitoring is thereby internalized as self-discipline.
The scholar Duan Weiwen has pointed out that the data-analysis society, a new form of intelligent society, has quietly arrived: its operation and governance rest on using data and intelligent algorithms to analyze human behavior. But this kind of intelligent monitoring is a speculative form of cognition; it may misread and improperly interfere with the agency of those it analyzes, and it is urgent to set boundaries against possible technological abuse (Duan Weiwen, 2020). Other researchers have called for a fourth generation of human rights, represented by digital human rights, in the context of the smart society (Ma Changshan, 2019). But realizing these goals is still a long way off.
V. Conclusion: How do we fight against the imprisonment of algorithms?
In the preface to “Amusing Ourselves to Death,” Neil Postman invoked the two metaphorical warnings of “1984” and “Brave New World”: “Orwell warned that people will be enslaved by external oppression, while Huxley believed that it is not Big Brother’s fault that people lose their freedom, success, and history. In his view, people will gradually come to love oppression and to adore the industrial technologies that deprive them of the capacity to think.” (Postman, 2004)
In a certain sense, the algorithm combines these two risks:
On the one hand, the more accurately an algorithm computes people, the more deeply it understands them, and therefore the more deeply it may monitor and control them;
On the other hand, the more deeply the algorithm understands people and the more “on target” its services are, the more satisfaction people derive from it, and the more they rely on and comply with it. When algorithms penetrate every aspect of social life, dependence on them becomes inertia, and people may grow ever less aware of the imprisonment the algorithm brings.
“The algorithm society has pushed the tension between freedom and shackles to the extreme” (Qi Yanping, 2019). On the one hand, algorithms promote the liberation and expansion of some human abilities; on the other, they confine people in certain ways. Yet when we reflect deeply on the various ways algorithms imprison people, our goal is not to shut algorithms out. This is much like our attitude toward the automobile.
The entry of automobiles into our lives brought both positive and negative effects, but humanity’s solution was not to prohibit their use; it was to train driving skills and to establish and enforce strict traffic laws, minimizing the harm they might cause. Similarly, when algorithms become a widely used technology that may bring risks of imprisonment in many respects, we cannot simply prohibit their use. Besides making the necessary adjustments at the legal and institutional levels, we must face the new characteristics of the algorithmic society and cultivate the corresponding qualities and capabilities of different actors.
For algorithm developers, the advocacy and cultivation of new technical rationality and algorithm ethics are particularly critical.
In recent years, critiques of algorithms and other intelligent technologies have not lacked criticism of technological rationality. Although reflection and criticism are necessary, as some scholars have pointed out, some critiques of technological rationality fall into a misunderstanding: they equate technological rationality with instrumental rationality and assume that promoting technological rationality inevitably leads to the decline of value rationality (Zhao Jianjun, 2006). Some studies that do not invoke the concept of technological rationality nonetheless habitually equate technical thinking with instrumental rationality, intentionally or unintentionally assuming that technology must be oriented by instrumental rationality.
However, as some researchers have recognized, technological rationality should be the internal unity of instrumental rationality and value rationality; it is precisely because of the inherent tension between these two rationalities that technological rationality is always in a state of internal contradictory movement.
With the expansion and deepening of human technical practice, the inherent contradictions of technological rationality present themselves in a one-dimensional, alienated form: instrumental rationality overwhelms value rationality, and value rationality shrinks into a mere appendage of an extremely inflated instrumental rationality (Liu Xiangle, 2017).
Today it is necessary to re-understand what technological rationality properly entails and to advocate, among algorithm developers, the integration of value rationality with instrumental rationality and of technical thinking with the humanistic spirit, rather than pushing algorithms to the extreme of instrumental rationality. On this basis, the goals, principles, and implementation paths of algorithm ethics should be fully explored and made a check and balance on algorithm developers.
For algorithm users, the algorithm age brings new requirements for human literacy: in addition to general media literacy and digital literacy, a certain degree of algorithm literacy is also required.
Like media literacy, advocating algorithm literacy does not presume that algorithms are simply bad or that people should reject them altogether; rather, it aims to make people realize that algorithms cannot be avoided in today’s era. It is therefore important to understand how different types of algorithms work and at which levels algorithms affect our cognition, behavior, social relationships, and our survival and development. On this basis, we can learn to coexist with algorithms while confronting their risks, better protecting people’s legitimate interests and standing.
Facing an unavoidable algorithmic society, only by improving our knowledge of algorithms and our ability to master them can we become the masters of algorithms rather than their “prisoners.”
The original article was published in the first issue of “Global Media Journal” in 2021.