Order Number: 636738393092
Type of Project: Essay
Writer Level: PhD (Verified)
Format: APA
Academic Sources: 10
Page Count: 3-12 pages
Write two sentences for each of the questions below.
The Crisis in Journalism
Internet-based companies have used technology to disrupt existing industries, undermining the financial foundation for traditional journalism (Franklin 2011; Jones 2009; McChesney and Pickard 2011; Meyer 2009). Subscriptions that had once funded newspaper journalism plummeted as users flocked to “free” online content.
Print advertising, which had made up the bulk of revenue for news organizations, also fled to the internet; Craigslist and eBay replaced the newspaper classified ads, whereas Google, Facebook, and online ad brokers replaced display ads. As users and advertisers moved online, publishers decided they had to follow.
Stand-alone news websites offered free online content, reinforcing the expectation that news should be available without cost. Some introduced paywalls to try to recapture some lost revenue. In the hope of finding greater readership, publications turned to “distributed content,” allowing their material to appear on Facebook and other platforms.
Unfortunately, of the people who find a news story through social media, about two-thirds remember the social media site where they found it, but fewer than half remember which news outlet originally published it (Kalogeropoulos and Newman 2017). Still, publishers competed to create content that met the format and content preferences of those platforms.
When Facebook research showed users engaged with video presentations more than text, the call for news outlets to “pivot to video” followed. In one example, The Washington Post, best known for its sober political coverage, began creating scripted funny videos as a way to attract more users via distributed content (Bilton 2017).
That is a change from how news organizations have operated in the past. At legacy news sites, whether the printed newspaper or the online edition, news organizations offer the user a package of content. Users might skim the headlines, check out the sports, and delve deep into a feature article all from a single news outlet.
That means the editorial staff at the outlets produces a well-rounded package of information and news, along with lighter lifestyle and entertainment stories. With distributed content, though, each story or video must stand on its own. Users graze across many different outlets without ever leaving the Facebook or Apple News platform where they first see the content.
They may not click on that serious Post story on health care reform, but they might watch a funny video. When the financial success of news outlets comes increasingly to rely on the “success” of clicks on individual articles, the dynamics of journalism change. Fed with the metrics that measure every move of a reader online, editors cannot help but be influenced by the likely popularity of a story when making decisions about what is worth assigning or writing about and what is not.
Rather than bypass gatekeepers, as some had predicted, the internet has merely created a new category of gatekeepers. As one journalism study (Bell and Owen 2017) of the situation put it, “There is a rapid takeover of traditional publishers’ roles by companies including Facebook, Snapchat, Google, and Twitter.
These companies have evolved beyond their role as distribution channels, and now control what audiences see and who gets paid for their attention, and even what format and type of journalism flourishes” (p. 9).
Meanwhile, as we saw in Chapter 3, print journalism jobs continued to plummet, newspapers closed, and the rise in internet-based journalism employment did not come close to keeping up with job losses elsewhere. Cuts hit local and state news organizations especially hard, often leaving city halls and statehouses with minimal coverage or none at all.
Some scholars have tried to strike a more positive tone, arguing that other developments offset the economic and technological challenges that news organizations have faced (Alexander, Breese, and Luengo 2016).
For example, a generation of quality journalists has taken up the new tools of digital journalism. More important, they claim, in the face of economic and technological trials, journalism has produced a robust defense of its goals and purpose in our culture, even if the traditional mechanisms to deliver that journalism are less viable.
In a digital world, assisting citizens’ involvement in democratic life and holding those in power accountable continue to be journalism’s reason for being. However, developments online are making those tasks more difficult than ever to achieve.
Information Distortions: Misinformation and Echo Chambers
Forty-seven minutes after news appeared about a high school mass shooting in Parkland, Florida, in 2018, right-wing posters on an anonymous chat board known for racist and anti-Semitic content were already plotting how to respond.
They decided to try to influence public perception of the event by spreading the lie that the students interviewed afterward were “crisis actors,” performers pretending to be students, and that the event was a “false flag” staged to generate support for gun restrictions.
Right-wing activists have used this tactic on other occasions, including after the Sandy Hook, Connecticut, and Aurora, Colorado, shootings. Over the next few hours, they scoured the students’ social media feeds looking for anything they could use against them. They created memes ridiculing the students and questioning their truthfulness.
They darkened photos of the shooter so he would not appear so white. Before the end of the day, right-wing conspiracy radio host Alex Jones was raising the possibility of a “false flag” on his Infowars program. After posters found out that one student was the son of an FBI agent, they promoted this as “evidence” that the event was part of a larger FBI-run anti-Trump campaign.
The tweets and memes circulated rapidly through social networks, with Donald Trump Jr. even “liking” a tweet about the supposed anti-Trump campaign. As these falsehoods circulated, people outraged by the offensive claims criticized them, inadvertently helping spread them across the internet.
Within the week, the number one “Trending” video on YouTube labeled the FBI agent’s son a fake “actor.” One regular poster in another right-wing forum put it this way the day after the Parkland attack: “There’s a war going on outside . . . and it is only partially being fought with guns. The real weapon is information and the attack is on the mind” (Timberg and Harwell 2018; Yglesias 2018).
Such right-wing memes falsely suggest that tragic mass shootings were actually staged by liberals and populated by “crisis actors” playing the roles of victims. These messages try to sow seeds of doubt about the authenticity of news, encourage divisiveness, and undermine any calls for gun legislation.
The ability of a small number of anonymous users to influence the national discussion of major issues speaks to the power of social media. The decentralized internet offered the promise of democratic participation and a “participatory culture” (Jenkins 2009) without the gatekeepers that controlled traditional media.
Ironically, highly centralized, corporate-owned social platforms emerged to display user work, host discussions, and facilitate networking. Some of this was beneficial: Charitable causes could crowdsource funding for their projects. Activists could use Twitter to help organize against repressive regimes.
Citizens could start Facebook groups to help address community concerns. Amateurs could share their creative talents on YouTube and post instructional do-it-yourself videos on an incredible range of topics. Reddit users could find a treasure trove of information in sub-forums on countless topics.
However, in bypassing traditional news media gatekeepers, information—and misinformation—could travel quickly and unimpeded across social networks because of how social media platforms work (Cacciatore, Scheufele, and Iyengar 2016). First, to serve the needs of advertisers, social media sites use their algorithms to divide users into tiny niche groups and steer users toward the same kind of content for which they have already shown a preference.
Second, amid an abundance of varied content, users may select only information consistent with their views. Third, users can also interact only with like-minded individuals in self-selected online social networks. The result can be “echo chambers” (Sunstein 2002) or “filter bubbles” (Pariser 2011), where users are never exposed to alternative views but have their existing views constantly reinforced.
If users “like” stories or videos taking one side or another on a social or political issue, the algorithms will feed them similar stories and downplay opposing views. If users “follow” active Twitter accounts or “subscribe” to YouTube channels that share political content with which they agree, they will be exposed to a steady stream of reinforcing messages. Over time, Facebook news feeds, Twitter streams, YouTube recommendations, and other sources can all amplify a single point of view.
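To make that reinforcement loop concrete, here is a minimal, purely illustrative Python sketch. It is not any platform’s actual algorithm; the two stylized viewpoints, the click probabilities, and the engagement-weighted recommender are invented assumptions for the example. It simply shows how a recommender that weights content by a user’s past engagement tends to narrow what that user sees over time.

```python
# Minimal, hypothetical sketch of a "filter bubble" feedback loop.
# This does NOT reproduce any real platform's recommender; it only illustrates
# how weighting content by past engagement can narrow a user's exposure.
import random

VIEWPOINTS = ["A", "B"]  # two stylized political viewpoints


def recommend(engagement_counts, n_items=10):
    """Pick feed items with probability proportional to past engagement (+1 smoothing)."""
    weights = [engagement_counts[v] + 1 for v in VIEWPOINTS]
    return random.choices(VIEWPOINTS, weights=weights, k=n_items)


def simulate(days=30, lean_toward_a=0.6, seed=42):
    """Simulate a user with a slight initial lean toward viewpoint A."""
    random.seed(seed)
    engagement = {"A": 0, "B": 0}
    for day in range(1, days + 1):
        feed = recommend(engagement)
        for item in feed:
            # The user is slightly more likely to click items matching viewpoint A.
            p_click = lean_toward_a if item == "A" else 1 - lean_toward_a
            if random.random() < p_click:
                engagement[item] += 1
        if day % 10 == 0:
            share_a = feed.count("A") / len(feed)
            print(f"day {day:2d}: share of viewpoint A in feed = {share_a:.0%}")


if __name__ == "__main__":
    simulate()
```

Run as a sketch like this, the simulated feed typically drifts toward whichever viewpoint the user initially favored, because each click raises that viewpoint’s weight in the next round of recommendations; that self-reinforcing drift is the dynamic the echo-chamber and filter-bubble critiques describe.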
Sometimes the promotion of fake news is not for political purposes. People can make money by attracting viewers who are sold to advertisers. One news story traced a stream of largely fabricated pro-Trump stories to a website created by a 22-year-old computer science student in Georgia, the former Soviet republic. The student said he’d tried to promote Hillary Clinton at first, but his site did not get many views. He switched to fabricating clickbait stories that promoted Donald Trump with headlines such as “Oh My God! Trump to Release Secret Document That Will Destroy Obama!” As a result, his traffic—and revenues—soared. “For me, this is all about income,” he said (Higgins, McIntire, and Dance 2016).
One way to look at this development is as the digital version of the “limited effects” model (Bennett and Iyengar 2008, 2010). From this perspective, social media’s influence on political communication is limited because users are self-selecting what they are exposed to and algorithms are just serving up content that reinforces existing attitudes and beliefs. Such arguments, though, are subject to the same critiques made of the earlier limited effects work: They overemphasize the importance of changing people’s minds and underplay the power of reinforcing existing beliefs.
Computational Propaganda: Trolls and Twitter Bots
Facebook’s own published research shows that the social media platform can influence voter registration and turnout (Bond et al. 2012; Jones et al. 2016). In a randomized, controlled experiment involving 61 million Facebook users during the 2010 congressional elections, the company tweaked the news feeds of some of them and increased voter turnout by more than 340,000 voters, a potentially significant number. In 2016, voter registration spiked when Facebook temporarily placed a simple reminder encouraging people to register to vote (Chokshi 2016). These examples are a reminder of the potential power of social media—and the potential for abuse.
So far, Russian interference in the 2016 U.S. presidential election is the most prominent—but certainly not the only—example of computational propaganda, “the use of algorithms, automation, and human curation to purposefully distribute misleading information over social media networks” (Woolley and Howard 2017: 6).
Although the impact it had on voter turnout or voter preference is unclear, election interference was aimed at helping Donald Trump win the presidency. The various U.S. intelligence agencies investigated this interference, and the declassified summary of the joint Intelligence Community Assessment (2017) concluded:
We assess Russian President Vladimir Putin ordered an influence campaign in 2016 aimed at the US presidential election. Russia’s goals were to undermine public faith in the US democratic process, denigrate Secretary Clinton, and harm her electability and potential presidency.
We further assess Putin and the Russian Government developed a clear preference for President-elect Trump. . . . We also assess Putin and the Russian Government aspired to help President-elect Trump’s election chances when possible by discrediting Secretary Clinton and publicly contrasting her unfavorably to him. (p. ii)
From this assessment and media accounts (Dewey 2016; Parkinson 2016; Reed 2016), we know that Russian operatives bought ads to spread false information, created fake Facebook groups and Twitter accounts to rile up the electorate and spread disinformation, and even organized both sides of competing protests to stir up discord.
For example, a Russian effort created a “Heart of Texas” Facebook group that eventually had 225,000 followers and a corresponding Twitter account. The group organized a series of anti-Clinton and anti-immigrant rallies in Texas just days before the election.
Many similar efforts took place, including one that created an anti-Muslim rally in Idaho promoted as “Citizens Before Refugees” (Bertrand 2017). In Michigan, one of the key battleground states, junk news spread by social media was shared just as widely as legitimate professional news in the days leading up to the election (Howard et al. 2017).
At this writing, the FBI’s investigation into Russian meddling in the election is continuing, but we already know a considerable amount about how media have been used in such efforts in the United States and elsewhere.
One overview of the current state of knowledge about computational propaganda comes from an Oxford University project carried out by an international team of 12 researchers (Woolley and Howard 2017). The researchers examined case studies of computational propaganda in nine countries, including the United States, Brazil, Ukraine, Russia, and China.
They interviewed 65 leading experts on the topic; identified large social networks on Facebook, Twitter, and Weibo (the Chinese micro-blogging site that is like a mix of Twitter and Facebook); and analyzed tens of millions of posts on seven different social media platforms during periods of intensified propaganda efforts around elections and political crises.
These social media accounts are important because, as the researchers note, in some countries “companies, such as Facebook, are effectively monopoly platforms for public life” and are “the primary media over which young people develop their political identities” (p. 2).
The researchers found widespread computational propaganda that employed different tactics and took on different characteristics in different settings. In authoritarian countries, “social media platforms are a primary means of social control,” and some platforms are controlled or effectively dominated by governments that run disinformation campaigns aimed at their own citizens.
For example, nearly half of Twitter activity in Russia is managed by highly automated government-connected accounts. In democracies, advocates or outside forces can use social media platforms to try to manipulate broad public opinion or to target specific segments of the population. In such cases, large numbers of fake accounts are set up and managed to give the appearance of widespread public support or opposition to an issue or candidate. (Fake accounts are a broader problem for Facebook and Google.
They charge advertisers by the number of clicks on their ads, but it is well known that a significant percentage of these clicks are produced by bots using fake accounts. The industry publication AdWeek estimates that one out of every six dollars in online advertising is spent on fraudulent clicks [Lanchester 2017].)
The researchers note that “[t]he most powerful forms of computational propaganda involve both algorithmic distribution and human curation – bots and trolls working together” (p. 5). They point out that social media bots used for political manipulation “are also effective tools for strengthening online propaganda and hate campaigns. One person, or a small group of people, can use an army of political bots on Twitter to give the illusion of large-scale consensus” (p. 6).
Right-wing organizations and causes are the source of most misinformation in the United States (Howard et al. 2017). During the 2016 presidential election, a network of Trump supporters on Twitter shared the greatest variety of junk news sources and circulated more junk news items than all other groups put together; extreme-right groups did the same on Facebook.
Hate and Censorship
On May 18, 2015, at 11:38 a.m., President Barack Obama posted his first tweet from the newly opened @potus Twitter account. Presidential tweeting was a novelty then, and his friendly first greeting was, “Hello, Twitter! It’s Barack. Really!” It took only 10 minutes for the racial epithets to start; at 11:48 someone replied “get cancer nigger” (Badash 2015).
New technologies have enabled old racism to flourish. The latest media content filled with racist overtones and imagery includes tweets (Cisneros and Nakayama 2015), viral videos (Gray 2015), memes (Yoon 2016), and even search engine results (Noble 2018), and racist hatred permeates the web (Jakubowicz et al. 2017).
Racism, and hatred more broadly, seem to thrive online. At home, a broad variety of hate groups uses the internet to recruit, organize, and spread lies. Globally, terrorist groups do the same. These groups used to rely on mainstream media to publicize their cause. As Barnett and Reynolds (2009) note, acts of terrorism were primarily efforts to attract “the attention of the news media, the public, and the government.
As coverage of September 11 showed, media are delivering the terrorist’s message in nearly every conceivable way” (p. 3). Some critics argue that mainstream news media often indirectly assist terrorists in publicizing both their grievances and their capabilities. However, in recent years, terrorists have relied more heavily on their own media.
The internet affords global terrorist groups and their supporters opportunities to communicate through both social media sites like YouTube and their own websites, which include discussion groups, videos, political articles, instruction manuals, and leaders’ speeches (Seib and Janbek 2011). They also can use the internet for encrypted communications.
In the wake of Russian interference with the 2016 presidential election that used these platforms, public concern grew, and elected officials began considering possible regulation if the companies did not address the most egregious issues. Now on alert, the corporations that owned the platforms began stepping in to try to identify and prevent “fake news” and hate sites.
Google’s head lawyer announced new steps to combat terrorism content on its YouTube platform, including hiring more humans to staff its “Trusted Flagger” program. It also would devote “more engineering resources to apply our most advanced machine learning research to train new ‘content classifiers’ to help us more quickly identify and remove extremist and terrorism-related content”; in other words, it would tweak its algorithms (Walker 2017). Twitter (2017), too,
RUBRIC

Excellent Quality (95-100%)

Introduction (45-41 points): The context and relevance of the issue, as well as a clear description of the study aim, are presented. The history of searches is discussed.

Literature Support (91-84 points): The context and relevance of the issue, as well as a clear description of the study aim, are presented. The history of searches is discussed.

Methodology (58-53 points): With titles for each slide as well as bulleted sections to group relevant information as required, the content is well-organized. Excellent use of typeface, color, images, effects, and so on to improve readability and presentation of content. The minimum length criterion of 10 slides/pages is reached.

Average Score (50-85%)

Introduction (40-38 points): More depth/information is required for the context and importance; otherwise the study detail will be unclear. No search history information is supplied.

Literature Support (83-76 points): There is a review of important theoretical literature, but limited integration of research into problem-related ideas. The review is only partly focused and arranged. Research that both supports and opposes is included. A summary of the material given is provided. The conclusion may or may not include a biblical integration.

Methodology (52-49 points): The content is somewhat ordered, but there is no discernible organization. The use of typeface, color, graphics, effects, and so on may sometimes distract from the presented content. The length criterion may not be reached.

Poor Quality (0-45%)

Introduction (37-1 points): The context and/or importance are lacking. No search history information is supplied.

Literature Support (75-1 points): Relevant theoretical literature has been examined, but no research concerning problem-related concepts has been synthesized. The review is only somewhat focused and organized. The overview of content provided does not include any supporting or opposing research. The conclusion has no scriptural references.

Methodology (48-1 points): There is no logical or apparent organizational structure and no discernible logical sequence. The use of typeface, color, graphics, effects, and so on often detracts from the presented content. The length criterion may not be reached.
Place the Order Here: https://standardwriter.com/orders/ordernow