
THE ROLE OF SOCIAL MEDIA PLATFORMS IN COMBATING HATE SPEECH

 

AUTHORED BY - ROHIT SHARMA[1]

& DR. AJAYA KUMAR BARNWAL[2]

 

 

Abstract

Scholars and philosophers alike have paid close attention to the problem of hate speech. The great majority of this attention, however, has been directed at presenting and critically assessing arguments for and against speech bans rather than at first analyzing the term "hate speech" conceptually. Social media platforms are increasingly under scrutiny for their role in facilitating hate speech, and in response they have implemented various strategies to combat this pervasive issue. These strategies often include the use of artificial intelligence and machine learning algorithms to detect and remove hate speech swiftly. Platforms have also strengthened community guidelines and user reporting systems so that users can flag and report offensive content, and collaboration with experts in sociology, psychology, and linguistics has helped refine detection methods and illuminate the nuanced nature of hate speech. Platforms have further invested in educational campaigns to raise awareness about the impact of hate speech and to promote digital literacy among users. Despite these efforts, challenges persist: the balance between free speech and censorship, the adaptability of hate groups in circumventing detection algorithms, and the global reach of social media platforms, which demands culturally sensitive approaches. Moving forward, continued collaboration between platforms, policymakers, civil society, and users will be crucial in fostering a safer and more inclusive online environment.

 

Keywords: Hate speech, social media, Internet, Facebook, Twitter, WhatsApp

 

 

 

Introduction

Social media has played an increasingly important role in domestic and international politics. Until recently, commentators and researchers lauded its capacity to level the political playing field, giving voice to marginal groups and new actors. In a new media culture in which anonymous entrepreneurs can reach massive audiences with little quality control, the opportunities for aspiring digital celebrities to spread hateful, even violent, judgements with little evidence, experience, or knowledge are nearly endless.[3]

Digital hate culture grew out of the swarm tactics of troll subcultures but has been co-opted for political purposes, with automated accounts or "bots"—some of which were associated with recent Russian information operations—adding to existing groups of users seeking to hijack information flows on social media platforms. As Angela Nagle writes, those participating in digital hate culture are zealots in a "war of position" seeking to change cultural norms and shape public debate. I use the term "digital hate culture" rather than "alt-right," "neo-Nazi," "white nationalist," "white supremacist," "fascist," or "racialist"—all subgroups that have a home in digital hate culture—to refer to the complex swarm of users that form contingent alliances to contest contemporary political culture and inject their ideology into new spaces. Digital hate culture is united by "a shared politics of negation: against liberalism, egalitarianism, 'political correctness', and the like," more so than by any consistent ideology.[4] In introducing the concept of digital hate culture in the context of politics and international relations, I caution readers against reading it as a uniform global phenomenon, even though similar cultural processes are at play throughout the world.

To combat digital hate culture effectively, we need to understand its formal characteristics. In doing so, I focus on the strategies and tactics used by exponents of digital hate culture rather than on the hateful language they express, which draws out both the dangerous elements of its speech and its ungovernability. Digital hate culture goes beyond offense; it employs dangerous discursive and cultural practices on the Internet to radicalize the public sphere and build support for radical right populist parties. By explaining its characteristics, I explore the cultural politics of digital hate and the codes it uses to flout hate speech laws and content regulation by private actors. In doing so, I argue that digital hate culture is ungovernable, but that with the right knowledge and tools democratic processes can work towards managing its dangerous effects.

 

How Can Social Media Tackle Hate Speech

For years, social media corporations accepted racist, homophobic, and anti-Semitic remarks and screeds as a necessary cost of doing business and did very little to keep hate speech off their platforms. In the last few years, however, hate speech on social media has moved to the center of debates over free expression and sociopolitical conflict.

 

Social media businesses find themselves in a difficult position, facing many competing pressures. In the industry's own words, they want to make the user experience enjoyable and "safe," but they also want to be seen as upholding the American ideal of free expression. They enjoy the prominence that traditional media once held in the United States, yet they reject regulation and the decades-old responsibility that came with that prominence, namely mediating the truth. Above all, perhaps, their goal is to keep growing their user base in order to grow revenue.

 

 "This is a serious threat because social media platforms are currently dominating the market, making large sums of money, and their influence is only increasing," Therefore, they will find themselves in a very different kind of predicament if the public and the political elite turn against them. Demands to restrict specific types of communication on social media have increased recently, both in India and abroad. Zeid Ra'ad al-Hussein, a former UN high commissioner for human rights, demanded that Facebook remove posts after accusing military leaders in Myanmar of inciting genocide on social media. Facebook complied.

 

Following attacks on Muslims in 2018, the government of Sri Lanka shut down Facebook, WhatsApp, and other platforms nationwide. The restriction was lifted only after a visit by Facebook representatives who pledged to curb hate speech and misuse.

 

Social media executives have also been taken to task at multiple congressional hearings. When asked to define hate speech during testimony in April 2018, Facebook CEO Mark Zuckerberg responded, "Senator, I think this is a really hard question, and I think it's one of the reasons why we struggle with it." Facebook has been urged to remove pages belonging to Holocaust deniers, but Zuckerberg has resisted.

 

Some believe social media corporations already exert excessive editorial control, reinforcing users' preexisting beliefs. Restricting platforms' ability to decide what information spreads and what does not is one proposed response to the concern that social media fosters an echo-chamber effect that deepens divisiveness in society.

 

At the same time, since the 2016 election there has been a great deal of worry that social media disseminates inaccurate or misleading information, and the proposed answer there is for platforms to exert more editorial control. Combine this with Cambridge Analytica and Trump's demands that Google answer for what appears when his name is searched, and social media companies are pulled in opposite directions. "I have compassion for these organizations' leaders because I think they are morally upright and want to act morally, but it can be difficult to know what is right."

But what if the public starts to avoid the Facebook news feed because of hate speech and the site's creepy and dispiriting atmosphere? Facebook's greatest fear is that people will come to see it as a depressing place, somewhere you go only to feel worse. It never wants to present you with competing viewpoints, because it knows that will enrage you. Facebook will die the instant you start to think of it the way you think of smoking.

 

The Right to Say Anything

For now, it is up to social media firms whether or not to take action against hate speech; they are not yet legally required to do anything.

 

In the United States, social media companies are largely free to do as they please under the First Amendment. There is nothing stopping Facebook from acting in exactly the way Trump accuses it of acting. Anti-discrimination laws may prevent them from discriminating on the basis of race and other protected characteristics, "but certainly not political ideology." They could declare, "We're only going to publish people who are members of the Republican party."[5]

 

"We have the right to remove this hate speech in accordance with our terms of service," and since they are a private company, they very definitely do. In actuality, a lot of Americans believe that social media actively participates in censorship. In a June Pew Research Center survey, 72% of respondents were asked if they thought it likely that social media platforms actively block political viewpoints that those firms deem disagreeable. Republicans were particularly prone to believe this: according to a Pew survey of 4,594 American adults, 85% of Republicans and Republican-leaning independents felt it was likely that social media platforms purposefully block political perspectives, with 54% saying it was very likely.

 

Although social media corporations frequently deny accusations that they intentionally suppress political opinions, an inclination away from censorship was built into social media's architecture even before the term "social media" was coined. "Social media companies claimed they were tech companies rather than media companies when they first entered the public eye." They consciously stated, 'We are choosing not to engage in that kind of content discrimination, and will let all voices have equal access to our platforms,' even though they knew they had the authority and right to act as traditional media companies do and serve an editorial function in deciding what to publish and what not to publish.[6]

 

By eschewing the gatekeeper function, social media companies positioned themselves as no more responsible for the messages spread on their platforms than phone companies are for the conversations carried over their lines. Nonetheless, the courts have given little guidance on how much authority the platforms ought to have. "If the statute is taken seriously, social media companies' authority will only extend to offensive or harassing content." Court rulings have interpreted this responsibility as covering a wide range of categories, giving social media businesses considerable discretion over how their news feeds are displayed. Yet because some courts have read the provision narrowly, there is a fair amount of legal uncertainty, which could expose the companies to significant liabilities.

The trouble with the phone-line analogy is that, when someone picks up the phone, they never find themselves listening in on hundreds of Holocaust deniers and white supremacists. Hate speech is prohibited on Facebook "because it creates an environment of intimidation and exclusion and in some cases may promote real-world violence," according to the company's community standards. A direct attack on someone based on race, ethnicity, national origin, religion, sexual orientation, caste, sex, gender, gender identity, or serious illness or disability is classified as hate speech,[7] and some protections are also offered on the basis of immigration status. Attacks are defined as violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation.
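To make the two-part structure of that policy concrete, the sketch below encodes the quoted protected characteristics and attack types as data for a toy rule-based check. It is purely illustrative: the category names come from the standards quoted above, but the matching function, its signature, and the idea of feeding it pre-labelled inputs are assumptions made for this example, not a description of Facebook's actual enforcement system.

# Illustrative only: category lists mirror the community standards quoted above;
# the rule itself is a simplifying assumption, not Facebook's system.
from dataclasses import dataclass

PROTECTED_CHARACTERISTICS = {
    "race", "ethnicity", "national origin", "religion", "sexual orientation",
    "caste", "sex", "gender", "gender identity", "serious illness or disability",
    "immigration status",  # receives some, narrower, protections
}

ATTACK_TYPES = {
    "violent or dehumanizing speech",
    "statements of inferiority",
    "calls for exclusion or segregation",
}

@dataclass
class ReviewDecision:
    is_hate_speech: bool
    rationale: str

def classify(attack_type: str, targeted_characteristic: str) -> ReviewDecision:
    """Two-part test: a recognised attack type directed at a protected
    characteristic counts as hate speech under this sketch."""
    if attack_type in ATTACK_TYPES and targeted_characteristic in PROTECTED_CHARACTERISTICS:
        return ReviewDecision(True, f"{attack_type} targeting {targeted_characteristic}")
    return ReviewDecision(False, "outside the policy definition")

print(classify("calls for exclusion or segregation", "religion"))
print(classify("insult", "favourite football club"))

Even in this toy form, the hard work of deciding whether a given post really is, say, a "call for exclusion" aimed at a protected group happens before the function is ever called, which is precisely where the interpretive disputes discussed below arise.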

 

Questions of Interpretation

This raises difficult questions about who gets to interpret the rules, the prejudices and experiences each interpreter brings to the work, and broader issues of context that algorithms are unable to take into account.

 

For example, when a community newspaper in Texas posted the Declaration of Independence on Facebook in installments in the days before the Fourth of July, Facebook flagged and removed the portion containing paragraphs 27–31. Slate reported that the trigger appears to have been the phrase "merciless Indian Savages." It was unclear whether the removal of a portion of the United States' founding document was purely algorithmic or involved a layer of human review.
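The episode is easiest to understand if one imagines the crudest possible detector. The snippet below is a deliberately naive, hypothetical keyword filter; Facebook's real classifier is not public and is certainly far more sophisticated. It simply shows how matching a phrase with no model of quotation, history, or intent flags a 1776 text exactly as it would a modern slur.

# Deliberately naive, hypothetical keyword filter: it illustrates context-blind
# matching and does not describe Facebook's actual classifier.
import re

FLAGGED_PATTERNS = [
    r"\bmerciless\s+indian\s+savages\b",  # the phrase Slate identified as the likely trigger
]

def flag_without_context(text: str) -> list:
    """Return every flagged pattern found, with no awareness of quotation,
    historical context, or the author's intent."""
    lowered = text.lower()
    return [p for p in FLAGGED_PATTERNS if re.search(p, lowered)]

declaration_excerpt = (
    "He has excited domestic insurrections amongst us, and has endeavoured to bring "
    "on the inhabitants of our frontiers, the merciless Indian Savages..."
)

print(flag_without_context(declaration_excerpt))  # the 1776 passage is flagged like any other post

A filter of this kind cannot distinguish a historical document quoted for civic purposes from a contemporary attack, which is precisely the interpretive gap this section describes.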

 

User agreements are of limited use in controlling speech, because what one group views as free and acceptable expression another may view as incitement, notes Ron Berman, a marketing professor at Wharton. Particularly troublesome is that many of these agreements rest on the blurry boundary between improper activity on the platform itself and improper consequences away from it. A Facebook call for Catalonia's independence from Spain, for instance, may strike a sizable portion of Catalans as legitimate free expression, but if the call subsequently sparks a violent protest, its legitimacy becomes far less clear.

Pressure

Pressure on social media businesses to act against hate speech is mounting, and "there is no doubt that the threat of regulation will have an impact on these companies' cultures." Yet judgments about what counts as hate speech are deeply contested. Some say Black Lives Matter denigrates other people; others claim All Lives Matter is racist because it is indifferent to those whose lives are most at risk. On one view, the only way to resolve such subjective disputes is to refrain from restricting free expression at all, since giving government officials or private-sector actors that much discretionary power will only do harm.

 

However, social media platforms have a strong commercial case for suppressing hate speech as much as possible. According to Berman, one danger for two-sided platforms such as Facebook is that they may experience a sudden "phase transition" from a positive to a negative state. "For instance, other advertisers... would not want to appear as condoning this advertising platform if it turns out that the Facebook ad-targeting algorithm allows advertisers to discriminate based on race, gender, or any other factor, or if the targeting algorithm would make it possible to promote hate speech." "From a public relations standpoint, I believe the problem is more with advertisers who might choose to stop using Facebook as an advertising platform because it will be perceived as permitting hate speech than it is with investors and regulators."

 

Facebook, YouTube, and Twitter are hiring thousands of new moderators, or "News Feed integrity data specialists," as Facebook calls them, to filter out content they consider to be in violation of their standards. But moderators are inconsistent, and that inconsistency puts minority users of social media at a disadvantage.[8] The report cited Facebook users whose posts on racial matters were deleted, while white friends who were asked to post the same content found that their posts were not removed.

 

Don't hold your breath for consistently applied justice. "The standards are irreducibly subjective, so the standard will be enforced with the subjective values of the enforcer." In the Silicon Valley mindset, however, there is a belief that everything can be solved algorithmically, "that there is a technical solution to every societal problem. They believe they have the solution but just have not found it yet."

 

Countering online hate speech through media and information literacy

Since the previous sections have focused primarily on reactive responses to the proliferation of hate speech online, this section turns to efforts to provide more structural answers through education. It examines a number of programs aimed at educating young people, or at working with educational institutions, to raise awareness of the problems surrounding perceived hate speech on the internet and of potential responses to it.[9]

Digital citizenship and civic education

Through the study of rights, freedoms, and obligations, citizenship education aims to prepare people to be knowledgeable and responsible citizens, and it has been used in a variety of ways in societies that have recently emerged from violent conflict (Osler and Starkey, 2005).

 

One of its primary goals is raising awareness of the political, social, and cultural rights of individuals and groups, including freedom of expression and the obligations and societal ramifications that follow from it. Effective reasoning and the ability to express one's thoughts and viewpoints respectfully have occasionally been listed among the learning objectives of citizenship education programs. Citizenship education approaches hate speech from two directions: teaching people how to recognize it, and teaching them how to respond to hateful messages.

 

One of its current challenges is adapting its objectives and tactics to the digital sphere, giving citizens the technological know-how and persuasive abilities they may need to counter hate speech online. Some of the organizations in this study are putting forward a new definition of digital citizenship that connects media consumers and producers to larger ethical and civic issues while incorporating the fundamental goals of media and information literacy: developing technical and critical skills. Relevant to this discussion is that global citizenship education (GCED) is one of the three main objectives of the UN Secretary-General's Global Education First Initiative (GEFI), launched in September 2012, and one of the priority areas of action for UNESCO's Education Programme (2014–2017).

The goal of global citizenship education is to provide students of all ages with values, information, and abilities that uphold and foster respect for social justice, human rights, diversity, gender equality, and environmental sustainability. GCED equips students with the knowledge and skills they need to understand their rights and responsibilities in order to advance a better society and future for all.

 

Within this broader framework, UNESCO also promotes media and information literacy, a broad concept that encompasses both online and offline literacy. According to Hoechsmann and Poyntz (2012), it entails the technical skills needed to use digital technologies, the knowledge and skills required to locate, analyze, assess, and interpret particular media texts, the creation of media messages, and the recognition of their social and political influence. Proponents of media literacy have also begun discussing the ethical ramifications of technology use and the rights and obligations it entails for civic engagement and society.

 

Information literacy can no longer ignore concerns such as critical citizenship, the right to privacy and free speech, and empowering people to participate in politics (Mossberger et al., 2008). Multiple, complementary literacies become essential. Social media and the development of new technologies have been major drivers of this change: people no longer merely absorb media messages; they are now makers, creators, and curators of information. This has led to new forms of engagement that interact with more conventional ones, such as voting or joining a political party. As a result, instructional approaches are evolving, moving from encouraging the critical interpretation of media messages to also empowering the creators of media content (Hoechsmann and Poyntz, 2012). The concept of media and information literacy itself continues to evolve, augmented by the dynamics of the Internet, and is beginning to embrace issues of identity, ethics, and rights in cyberspace.[10]

 

 

Education as a tool against hate speech

All of the projects and organizations discussed here emphasize the value of media and information literacy and of instructional tactics as effective ways to combat hate speech, although each has its own features and objectives. Compared with the difficulty of deciding whether to restrict or ban online content outright, or with the time and money it can take for legal action to produce tangible results, they stress that educational approaches can offer a more systematic and practical response to hate speech.

 

Many contend that the set of abilities associated with media and information literacy can empower people and give them the know-how they need to react more quickly to communication they consider to be hate speech. Given the emphasis that social networking sites place on individual reporting of abuse, incitement to hatred, or harassment, these abilities may also prove crucial. At the same time, those engaged in these initiatives acknowledge the value of using the legal framework as a guide for their work.

 

As Laura Geraghty from the 'No Hate Speech Movement' affirmed: "The key to stopping hate speech online is education. Raising awareness and enabling individuals to use the internet responsibly are important, but prevention measures won't be helpful if there isn't a legal framework and the means to prosecute hate crimes, including online hate speech."[11] Many of the projects treat the legal and educational dimensions as complementary, and the majority involve educating the public about the tools and processes the legal system uses to prosecute those who spread hate speech online. Given the severity of the issue, media literacy and education may become an increasingly important tactic in the fight against online hate speech, but they also need to be considered in conjunction with other approaches.

Acquiring essential abilities to counter hate speech online

The projects under analysis are united by their emphasis on developing critical thinking skills and on the ethically responsible use of social media as starting points for media and information literacy skills to counter hate speech online. These skills are expected to improve people's capacity to recognize and challenge hostile content on the internet, to comprehend some of its assumptions, biases, and prejudices, and to stimulate the development of counter-arguments. It should come as no surprise that hate speech online is not always easy to spot. In a focus group organized by MediaSmarts, a teacher recounted: "It took me a few minutes to realize that I was on a website that was sympathetic to the Nazis. It was written in a way that was exceptionally nasty. It used prose to hide the true meanings of racism and hatred."

 

"You know, verbally," the teacher continued. "Since it took a while for my own light bulb to go on, I actually had the kids take a look at it. They had no idea what they were looking at. When I invited them to look a bit closer, some of them began to see it, while others still could not. That intrigued them, because I was able to see something they could not. It became a way of letting them see, and become intrigued by, the fact that someone was genuinely spreading hatred, even though at first it didn't look like it." The target audience for the initiatives under analysis is a broad spectrum of people who are affected by and involved in online hate speech.

 

For the participating organizations examined in this study, it is crucial to focus on marginalized communities and on those who are more likely to become targets of hate speech or to be drawn into producing it. Youth and children are among the primary target audiences of these initiatives.

 

According to Matthew Johnson, Director of Education at MediaSmarts, teaching young people that media are constructs which re-present reality, and that it matters who shapes what they see, helps them understand why hate content in games, music, and other seemingly "trivial" media still needs to be addressed. Education about the ideological messages media convey about authority, power, and values likewise helps them understand why such messages have social and political ramifications. The school community, parents, and educators are often regarded as significant audiences because of their role in shielding children from hateful content.

 

Target groups also include individuals who can significantly shape how hate speech is exposed and discussed online, such as journalists, bloggers, and activists, as well as those who can influence the legal and political landscape around hate speech, such as policymakers and non-governmental organizations.

Educational goals of media and information literacy to respond to hate speech

Despite differences in the content they offer and the populations they target, these initiatives share three overarching educational objectives: to inform about, to analyze, and to counter hate speech. The three objectives can be viewed as a continuum of progressively more ambitious goals, each addressing a different facet of the issue and offering a distinct way of combating hate speech online. The first objective aims to inform people about hate speech; the second, to analyze the phenomenon critically; and the third, to motivate people to take particular actions.

                     

Initiatives that pursue information as an educational purpose aim to raise awareness of hate speech on the internet, its various manifestations, and its potential repercussions, and they also provide details on relevant international, regional, and national legal frameworks. Examples come in multiple formats, for instance the video 'No Hate Ninja Project - A Story About Cats, Unicorns and Hate Speech' by the No Hate Speech Movement,[12] the interactive e-tutorial 'Facing online hate' by MediaSmarts,[13] or the toolbox developed by the project 'In Other Words'.[14]

The second learning objective is more intricate and centers on the analysis of hate speech that appears online. This comprises examining and assessing the various forms hate speech takes on the internet, such as sexism, homophobia, and racism, as well as the various categories into which it falls. A crucial component is critically analyzing hate speech to identify its common sources and to understand its underlying assumptions, biases, and prejudices. This analytical approach enables individuals to report and expose hateful content online. The Online Hate Prevention Institute's reporting platform, for example, allows people to report and track hate speech online by exposing what they consider to be hate speech, monitoring websites, forums, and groups, and examining offensive material that others have already exposed.

Lastly, the third educational purpose identified in these programs is to encourage measures that can be taken to counteract acts of hate speech. The resources attached to this goal are designed to encourage proactive measures and responses to hate speech expressed online. The suggested courses of action vary with the objectives of the project and the organization, ranging from mildly confrontational to highly assertive; the primary goal, however, is to enable people to confront and actively counteract harmful content. People in positions of power have widely exploited the open voicing of animosity towards specific groups in order to mobilize crowds for violent action.[15] Examples of such projects include the No Hate Speech Movement's training programs for bloggers, journalists, and activists, MediaSmarts' lesson plans and teaching materials, and the 'In Other Words' project's suggested media monitoring guidelines.
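To make the report-and-track workflow just described more concrete, the sketch below models a minimal reporting record and tracker. The class names, field names, and statuses are assumptions invented for illustration; they do not describe the Online Hate Prevention Institute's actual platform or data model.

# Minimal, hypothetical sketch of a report-and-track workflow; field names and
# statuses are assumptions, not the Online Hate Prevention Institute's schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class HateSpeechReport:
    url: str              # where the reporter found the content
    category: str         # e.g. "racism", "sexism", "homophobia"
    description: str      # the reporter's own assessment of the material
    status: str = "open"  # open -> under review -> actioned or dismissed
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class ReportTracker:
    """Keeps every submitted report visible so others can examine and follow it."""

    def __init__(self) -> None:
        self.reports: List[HateSpeechReport] = []

    def submit(self, report: HateSpeechReport) -> None:
        self.reports.append(report)

    def open_reports(self, category: Optional[str] = None) -> List[HateSpeechReport]:
        """List unresolved reports, optionally filtered by category."""
        return [r for r in self.reports
                if r.status == "open" and (category is None or r.category == category)]

tracker = ReportTracker()
tracker.submit(HateSpeechReport(
    url="https://example.org/forum/thread/123",  # hypothetical URL
    category="racism",
    description="Thread dehumanizing an ethnic group",
))
print(len(tracker.open_reports("racism")))  # 1

Even so simple a structure makes the section's larger point visible: the record stores what the reporter considers to be hate speech, so the contested judgment remains a human one, while the platform contributes persistence and visibility.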

 

Evaluating educational programs related to media and information literacy

While certain organizations and campaigns concentrate on the content of online hate speech, others highlight its human side by focusing on the victims or on its overall effects on the community. Whatever their aim, the majority of programs view the improvement of digital literacy as a crucial component in preventing, exposing, and combating hate speech online. The tools and tactics examined show a range of methods for cultivating these abilities, from straightforward "how-to" guides to more intricate and specialized instruction.

 

Videos, blogs, websites, video games, and social media are just a few of the many formats these initiatives use, allowing them to attract and reach a wide range of users. However, comprehensive studies are still lacking, making it difficult to determine whether, and to what extent, these measures are effective in preventing hate speech or in influencing the groups most inclined to engage in hate speech online.

 

For example, although MediaSmarts' programs and resources have won numerous accolades, it is difficult to assess their effectiveness because there is no clear indication of who uses them most. As for the 'In Other Words' project, one of its anticipated outcomes was the creation of content for distribution, yet there is no information on how this content was used after publication or on the audiences it reached.

 

In the case of the 'No Hate Speech Movement,' which has produced various tools and materials (instructional manuals, films, online platforms for reporting hate speech, and educational resources), there are likewise no explicit public criteria for assessing or documenting impact.[16] While most of these initiatives are admirable and may provide effective tools for combating hate speech structurally, more data is needed to understand how people incorporate newly acquired skills into their daily routines and how this affects their online behavior.

 

Conclusion

Online hate speech is a growing problem, and understanding its significance and ramifications, and developing effective countermeasures, will require collective effort. A common response has been outward displays of outrage, with some public figures demanding harsher penalties for those who disseminate hate speech and more stringent regulation of online communication. As this study has shown, however, concentrating only on repressive measures risks obscuring the complexity of a phenomenon that is still poorly understood and that calls for a coordinated and tailored response from a variety of actors in society.

 

Online environments offer a unique window into human behavior because of their capacity to encourage interaction and because they generate unprecedented volumes of data that can be analyzed with a wide range of novel methods. A better understanding of how various kinds of expression emerge, interact, and possibly dissipate in this context is essential for developing effective solutions. This study has provided numerous specific instances of how different circumstances have prompted tailored responses. While the formation of each response is tied to specific circumstances, studying and disseminating these responses provides a general palette of techniques that stakeholders can apply to diverse situations.

 

To help resolve some of the fundamental issues that characterize hate speech online, this concluding section has drawn together the key findings.

 


[1] Research Scholar, Faculty of Law, Banaras Hindu University, Varanasi

[2] Assistant Professor, Faculty of Law, Banaras Hindu University, Varanasi

[3] Nigel Warburton, Free Speech: A Very Short Introduction (Oxford University Press, 2009); Soroush Vosoughi, Deb Roy, and Sinan Aral, "The Spread of True and False News Online," Science 359, no. 6380 (2018): 1146–1151.

[4] Evan Malmgren, "Don't Feed the Trolls," Dissent Magazine (Spring 2017), https://www.dissentmagazine.org/article/dont-feed-the-trolls-alt-right-culture-4chan; George Hawley, Making Sense of the Alt-Right (Columbia University Press, 2017).

[5] How Can Social Media Firms Tackle Hate Speech? - Knowledge at Wharton. https://knowledge.wharton.upenn.edu/podcast/knowledge-at-wharton-podcast/can-social-media-firms-tackle-hate-speech/

[6] How Can Social Media Firms Tackle Hate Speech? - Knowledge at Wharton. https://knowledge.wharton.upenn.edu/podcast/knowledge-at-wharton-podcast/can-social-media-firms-tackle-hate-speech/

[7] Why is Facebook Tweaking its Community Standards? - ToolsMetric. https://toolsmetric.com/blog/why-is-facebook-tweaking-its-community-standards-find-out-how-it-will-provide-clarity-on-satirical-conte/

[8] A report last year by the Center for Investigative Reporting.

[9] Countering Online Hate Speech 3 - PDFCOFFEE.COM. https://pdfcoffee.com/countering-online-hate-speech-3-pdf-free.html

[10] Paris Declaration on MIL in the Digital Era. http://www.unesco.org/new/en/communication-andinformation/resources/news-and-in-focus-articles/in-focus-articles/2014/paris-declaration-on-mediaand-information-literacy-adopted/

[11] Interview: Laura Geraghty, No Hate Speech Movement, 25 November 2014.

[12] No Hate Speech Movement, No Hate Ninja Project - A Story About Cats, Unicorns and Hate Speech. Available online at: https://www.youtube.com/watch?v=kp7ww3KvccE

[13] MediaSmarts, Facing online hate. Available online at: http://mediasmarts.ca/tutorial/facing-online-hatetutoria

[14] In Other Words Project, Toolbox. Available online at: http://www.inotherwords-project.eu/sites/default/ files/Toolbox.pdf

[15] Aileen Donegan, "Debate 2: 'Hate speech is more than free speech'", No Hate Speech Movement Forum, 17 October 2013, http://forum.nohatespeechmovement.org/discussion/6/debate-2-hate-speech-ismore-than-free-speech/p1

[16] No Hate Speech Movement, Follow-Up Group, Fifth Meeting. Available online at: http://nohate.ext.coe.int/The-Campaign/Follow-Up-Group-of-the-Joint-Council-on-Youth2


