- Vishal Shah, Assistant Professor, Central Michigan University, US, email@example.com
- Carlo Bellini, Associate Professor, Federal University of Paraiba, firstname.lastname@example.org
Description of Proposed Track:
The Internet is a powerful means for people to share information freely and reliably. This is possible due to the Internet’s technological infrastructure, governance principles, global reach, and Web 2.0 features that enable on-the-scene, real-time, user-generated content. However, governments around the world have been censoring online content or building their own regional Internet infrastructure in order to manipulate information, create particular visions of the information world, and ultimately dominate their people (Naím & Bennett 2015). Governments may also reframe available online information to serve their own intents.
While governments challenge the world of free information in a systematic fashion and with long-term intents, certain individuals also act alone or in groups to manipulate information with short-term goals based on incidental motivations and convenient opportunities. Interestingly, such opportunities emerge in regions where governments do not censor the flow of information in cyberspace, that is, where information democracy is the norm. In such places, certain individuals may want to cause instant damage to other individuals or institutions, and they find opportunities in distributing false information to a large audience given the Internet’s reach. Perpetrators engage in information fraud even though such fraud can often be detected with a mere inspection of other relevant sources also available on the Internet. This is the case in a large number of situations, such as when individuals distort a politician’s image, a country’s economic or social indicators, or a company’s effectiveness in customer service. The life expectancy of false online information is often short, but such falsehoods can exert immediate damage on their targets – and there is virtually no penalty for such crimes, since legislation regulating the spread of false information on the Internet is largely missing in the democratic world and across countries.
Though the Internet is commonly recognized as the best tool to promote quality information – insofar as quality can be assessed through accuracy, completeness, timeliness, and source transparency – it has in fact also been used to spread false information. False information and rumors are extremely powerful in igniting the emergence of an anti-information, anti-intellectual society. Particularly dangerous in cyberspace is the use of evidence-based data to craft false arguments, usually by resorting to incomplete data and ingenious views on correlations; the malicious use of factual data has been creatively termed “weapons of math destruction” (O’Neil 2016). In other words, factual data may be used to create false information and narratives that linger and sow discord in the human mind. The information revolution now needs to survive the information wars and restore credibility before a modern version of the Roman catacombs – such as the Deep Web – is needed for people to share and consume real, useful information safely and accurately.
In this scenario, information processing has become increasingly cognitively demanding as we are confronted with information of mixed quality. We approach information of unknown quality, and it approaches us in everyday contexts especially through our mobile devices and services such as social media. The processing of information stems from the fundamental need to connect and be part of the world around us (Maslow 1971). However, in addition to the aforementioned deliberate ill-uses of information by third parties, information overload is also a serious threat to our capacity to process information and make good decisions based on it (Eppler & Mengis 2004). As a consequence, also at risk is our expectation of being effective in the digital society – i.e., of making use of technology-mediated information vis-à-vis a purpose and in a systemically healthy way (Bellini 2018).
Accordingly, as recent events throughout the world have shown, social media platforms are effective means to promote false narratives that amplify bias and try to influence public opinion. There are 3.2 billion social media users in a world population of 7.6 billion individuals, of which 2.7 billion are active through their mobile devices (Kemp 2018). Given the spread of information of mixed quality and the fact that bounded rationality (Simon 1979) is a permanent limitation on our ability to deal with information overload, the situation is ripe for opportunists to spread false information – a.k.a. fake news – on multiple online platforms. The ability of an individual, a group, or state agents to use platforms like social media to spread false information has indeed been amplified, as evidenced recently in political campaigning and elections (Allcott & Gentzkow 2017; Marchi 2012). As research by Lazer et al. (2018) points out, the global society needs new safeguarding standards and novel frameworks to approach this problem.
Our purpose in this track is to provide a forum for such safeguards. We encourage papers that address the broad area of information spread and technology use, and their effects in biasing personal and/or political decision-making. This track specifically encourages submissions of research exploring innovative ways to identify the mechanisms and causes of the spread of false information, and ways to deal with these mechanisms in the context of rhetoric, design, and social media. We invite submissions that elaborate the causes and impacts of false information, including conceptual and theoretical developments, empirical research findings, case studies, research in progress, methodology papers, and other high-quality contributions. Submissions detailing research on measures to prevent the spread of false information (theoretical measures, behavioral interventions, or the design of novel artifacts) are also welcome.
Opportunities in Leading Journals (if any):
Promising papers will be fast-tracked to BAR – Brazilian Administration Review upon the authors’ consent. BAR is the international flagship journal of the Brazilian Academy of Management (ANPAD). It is indexed in Scopus.
Mini-track: Rhetoric, technology, and disinformation
Mini-track chair: James Melton, Central Michigan University, email@example.com
This mini-track seeks to explore the relationship between rhetoric, social media platforms, and disinformation. One way to deal with disinformation and to avoid exacerbating biases is to have a general population trained in rhetoric. Because the discipline of rhetoric studies the effects of persuasion on audiences, it can help make those audiences more aware of the mechanisms of spreading disinformation. For example, recent papers studied how to inoculate people against misinformation by asking them to play roles such as a “clickbait monger” seeking clicks or a “conspiracy theorist”, and found that when made aware of the ease with which misinformation can be spread, people were more likely to be critical of it in the future (Roozenbeek & van der Linden 2018; van der Linden et al. 2017). Such interventions demonstrate that awareness of the rhetorical mechanisms that enable the spread of disinformation can help combat bias. We welcome papers at the intersection of rhetoric, psychology, and information systems that attempt to solve the problem of disinformation from an interdisciplinary standpoint.
Mini-track: User experience, human-computer interaction, and design of (dis)information
Mini-track chair: Gustav Verhulsdonck, Central Michigan University, firstname.lastname@example.org
This mini-track seeks papers at the intersection of User Experience (UX) design, Human-Computer Interaction (HCI), and disinformation. Designing for user experience is one way to tackle the problem of disinformation. Today’s technological devices may promote user engagement through designers’ deep knowledge of the user’s behavior and psychology (Choi & Kim 2004; Chou & Ting 2003). Persuasive design and design for behavior motivate users to stay longer on a platform by “gaming” their behavior or decisions through the design of an interface (Fogg 2002; Lockton et al. 2010). This can range from simplifying a design with a clear call-to-action so that users make a purchase or stay on the platform, to deceptive practices in which shaming language is used to discourage users from opting in or out of policies (a.k.a. “confirmshaming”). Often, design practices serve to clarify things for the user, but they may also utilize disinformation and serve the underlying economic motive of the platform. What mechanisms can help prevent disinformation from a design point of view? Which design practices should UX designers consider to counter disinformation and to develop more transparent, ethical designs for users? We encourage all types of papers dealing with the design of (dis)information and exploring issues of agency, platforms, and design in light of the challenges of user experience.
Mini-track: Social media and disinformation
Mini-track chair: Rishikesh Jena, University of Alabama, email@example.com
This mini-track seeks papers that elaborate and/or address the underlying causes of disinformation through technological means. Researchers have shown that false information spreads more quickly, deeper, and farther than true information, partly because human nature accepts rumors more readily than truthful statements (Vosoughi, Roy & Aral 2018). The use of social technologies, which allow for the quick dissemination of information, further encourages this dynamic by offering strong user engagement but little to no context to users. A balancing act is required between mechanisms for disseminating information through these technologies and mechanisms that allow us to check the validity of that information. Technological developments (algorithms, big data, artificial intelligence, the Internet of Things, and smart technologies) hold the promise of combating misinformation. At the same time, artificial intelligence, big data, and algorithms offer little to no transparency into the inferences they make about our online actions, inferences that are often used to present advertisements or information to us. In this mini-track, we are therefore looking for research on the diverse causes of misinformation/disinformation in social technologies and on the variety of ways that these technologies can help us combat it.
References:
Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211-236.
Bellini, C.G.P. (2018). The ABCs of effectiveness in the digital society. Communications of the ACM, 61(7), 84-91.
Choi, D., & Kim, J. (2004). Why people continue to play online games: In search of critical design factors to increase customer loyalty to online contents. CyberPsychology & Behavior, 7(1), 11-24.
Chou, Y.J., & Ting, C. C. (2003). The role of flow experience in cyber-game addiction. CyberPsychology & Behavior, 6(6), 663-675.
Eppler, M.J., & Mengis, J. (2004). The concept of information overload: A review of literature from organization science, accounting, marketing, MIS, and related disciplines. The Information Society, 20(5), 325-344.
Fogg, B.J. (2002). Persuasive technology: Using computers to change what we think and do (interactive technologies). San Francisco, CA: Morgan Kaufmann.
Kemp, S. (2018). Global digital report 2018. Retrieved from: https://wearesocial.com/blog/2018/01/global-digital-report-2018
Lazer, D. M., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., Metzger, M. J., Nyhan, B., Pennycook, G., Rothschild, D., Schudson, M., Sloman, S. A., Sunstein, C. R., Thorson, E. A., Watts, D. J., & Zittrain, J. L. (2018). The science of fake news. Science, 359(6380), 1094-1096.
Lockton, D., Harrison, D., & Stanton, N.A. (2010). Design with intent: 101 patterns for influencing behaviour through design v.1.0, Windsor: Equifine.
Marchi, R. (2012). With Facebook, blogs, and fake news, teens reject journalistic “objectivity”. Journal of Communication Inquiry, 36(3), 246-262.
Maslow, A. H. (1971). The farther reaches of human nature. London, UK: Penguin Books.
Naím, M., & Bennett, P. (2015, February 16). The anti-information age: How governments are reinventing censorship in the 21st century. The Atlantic. Retrieved from: https://www.theatlantic.com/international/archive/2015/02/government-censorship-21st-century-internet/385528/
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. New York, NY, USA: Crown Publishers.
Roozenbeek, J., & van der Linden, S. (2018). The fake news game: Actively inoculating against the risk of misinformation. Journal of Risk Research, DOI: 10.1080/13669877.2018.1443491
Simon, H. A. (1979). Rational decision making in business organizations. American Economic Review, 69(4), 493-513.
van der Linden, S., Maibach, E., Cook, J., Leiserowitz, A., & Lewandowsky, S. (2017). Inoculating against misinformation. Science, 358(6367), 1141-1142.
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151.