
There’s a meme on Instagram, circulated by a group called “Born Liberal.” A fist holds a cluster of strings, reaching down into people with television sets for heads. The text declares: “The People Believe What the Media Tells Them They Believe: George Orwell.” The quote is surely false, but it’s also perfect in a way. “Born Liberal” was a creation of the Internet Research Agency, the Russian propaganda wing that might as well be part of Oceania. In other words, we live in a time when American democratic debate is being influenced by liars spreading memes about our inability to understand the truth.

This particular meme is one of many revealed in a new report released on Monday, commissioned by the Senate Intelligence Committee and written by New Knowledge, a cybersecurity firm whose director of research, Renee DiResta, is a WIRED contributor. This report, along with a second one written by the Computational Propaganda Project at Oxford University and Graphika, offers the most extensive look at the IRA’s attempts to divide Americans, suppress the vote, and boost then-candidate Donald Trump before and after the 2016 presidential election. The report sheds new light on the ways the IRA trolls targeted African Americans and the outsized role Instagram played in their work. It also calls into question statements tech executives have made under oath to Congress in the past 18 months.

The report by New Knowledge is based on a review of 10.4 million tweets, 1,100 YouTube videos, 116,000 Instagram posts, and 61,500 unique Facebook posts published from 2015 through 2017. This is not a complete data set of Russian influence operations, but it’s still the largest such analysis to take place outside of the companies themselves. And it shows that the Russians weren’t just running a bland content farm, churning out propaganda in broken English. The operation was deeply sophisticated, and at times, downright funny. As the report’s authors note: “The IRA was fluent in American trolling culture.”

The most explosive finding in the report may be the assertion that both Facebook and Google executives misled Congress in statements. The researchers suggest that Facebook “dissembled” about the IRA’s voter suppression efforts on the platform in written responses sent to Congress in October, following the testimony of chief operating officer Sheryl Sandberg. At the time, the company was asked: “Does Facebook believe that any of the content created by the Russian Internet Research Agency was designed to discourage anyone from voting?” Facebook responded: “We believe this is an assessment that can be made only by investigators with access to classified intelligence and information from all relevant companies and industries.”

A Facebook spokesperson added on Monday morning: “We continue to fully cooperate with officials investigating the IRA’s activity on Facebook and Instagram around the 2016 election. We’ve provided thousands of ads and pieces of content to the Senate Select Committee on Intelligence for review and shared information with the public about what we found.”

Nevertheless, the report lays out ample clear examples of how Facebook and Twitter were both used to discourage turnout. In some cases, the trolls tried to mislead people into texting their votes. In others, they encouraged Americans to vote for third-party candidates like Jill Stein, or to give up on voting altogether, with messages that read “F*CK THE ELECTIONS.”

Meanwhile, the authors of the report question Google’s disclosures just before the Senate Intelligence Committee hearing in October 2017. At the time, the company put out a statement saying that none of the IRA-linked YouTube accounts was “targeted to the US or to any particular sector of the US population.” Yet the researchers found that, in fact, of the 1,100 total YouTube videos they discovered, 1,063 focused on police brutality and Black Lives Matter, 571 of which had keywords related to police and police brutality.1 While the statement was likely talking about advertising targeting, the report’s authors believe that it “appears disingenuous.” You can read the full report at the bottom of this story.

“We conducted an in-depth investigation across multiple product areas, and provided a detailed and thorough report to investigators. As we said at the time, videos on YouTube are viewable by anyone. Users can create videos intended for certain audiences, but there is no way to target by race on Google or YouTube,” a Google spokesperson said in a statement Monday afternoon.2

The focus on police brutality and content targeting African Americans wasn’t limited to YouTube. Among more than a dozen web domains the IRA registered, the vast majority were aimed at black communities. Of the 33 most popular Facebook pages linked to the IRA, nearly half focused on black audiences. This effort was particularly successful on Instagram, where the account @blackstagram_ amassed more than 300,000 followers and elicited more than 28 million reactions. Much of this content seemed designed to stoke distrust among African Americans in democratic institutions and depress black turnout for Democratic candidate Hillary Clinton.

Conversations around the IRA’s operations traditionally have focused on Facebook and Twitter, but like any hip millennial, the IRA was actually most obsessive about Instagram. “Instagram was perhaps the most effective platform for the Internet Research Agency,” the New Knowledge researchers write. All in, the troll accounts received 187 million engagements on Instagram, and about 40 percent of the accounts they created had at least 10,000 followers.

That isn’t to say, however, that the trolls neglected Twitter. There, the IRA deployed 3,841 accounts, including several personas that “regularly played hashtag games.” That approach paid off; 1.4 million people engaged with the tweets, leading to nearly 73 million engagements. Most of this work was focused on news, while on Facebook and Instagram, the Russians prioritized “deeper relationships,” according to the researchers. On Facebook, the IRA notched a total of 3.3 million page followers, who engaged with their politically divisive content 76.5 million times. Russia’s most popular pages targeted the right wing and the black community. The trolls also knew their audiences; they deployed Pepe memes at pages intended for right-leaning millennials, but kept them away from posts directed at older conservative Facebook users. Not every attempt was a hit; while 33 of the 81 IRA Facebook pages had over 1,000 followers, dozens had none at all.

That the IRA trolls aimed to pit Americans against each other with divisive memes is now well known. But this latest report reveals just how bizarre some of the IRA’s outreach got. To collect personally identifying information about targets, and perhaps use it to create Custom and Lookalike Audiences on Facebook, the IRA’s Instagram pages sold all kinds of merchandise. That includes LGBT sex toys and “many variants of triptych and 5-panel artwork featuring traditionally conservative, patriotic themes.”

The IRA also worked to recruit offline converts with job listings, some of which reveal just how low the trolls were willing to go to carry out their plot. One Facebook page called Army of Jesus offered free counseling to people with sexual addiction, using ads that read “‘Struggling with addiction to masturbation? Reach out to me and we will beat it together’ - Jesus.”

The report also points out new links between the IRA’s pages and Wikileaks, which helped disseminate hacked emails from Clinton campaign manager John Podesta in the weeks leading up to the election. On October 4, 2016, days before the first email dump, the researchers found Facebook and Instagram posts about Wikileaks founder Julian Assange, which “reinforc[ed] his reputation as a freedom fighter.”

It’s important to stress that all of this represents organic activity—that is to say, Russian presence unrelated to the relatively small ad spend that Facebook executives pointed to as the story first unfolded, in what the report authors describe as an attempt to downplay the problem. The authors also note that even silly memes can change minds: “While many people think of memes as ‘cat pictures with words,’ the Defense Department and DARPA have studied them for years as a powerful tool of cultural influence, capable of reinforcing or even changing values and behavior.”

The researchers can’t say whether any of this propaganda actually influenced the election. That’s partly to do with the squishy nature of measuring political persuasion and partly to do with the fact that some key data remains missing. The researchers had no access, for example, to user comments or conversion data that might have helped illuminate the impact this content had.

What these millions of digital artifacts do show, when taken together, is just how much planning and coordination went into the IRA’s scheme. Between the Twitter handles, the Facebook pages, the Instagram posts, the YouTube personalities, the fake local news sites, and in at least one case, a phony geopolitical think tank, the trolls created their own mini-internet to prop up Trump and spread distrust in his opponent and the election system itself. What’s more, their efforts remain ongoing, years after the election. Using the trove of data, the researchers were able to uncover even more IRA-linked Facebook pages, including one that was updated as recently as May of this year.

All of this demonstrates, according to the report authors, that “over the past five years, disinformation has evolved from a nuisance into high-stakes information war.” And yet, rather than fighting back effectively, Americans are battling each other over what to do about it. “We have conversations about whether or not bots have the right to free speech, we respect the privacy of fake people, and we hold congressional hearings to debate whether YouTube personalities have been unfairly downranked,” the report reads. “It is precisely our commitment to democratic principles that puts us at an asymmetric disadvantage against an adversary who enthusiastically engages in censorship, manipulation, and suppression internally.”

Additional reporting by Brian Barrett.

1 Updated, 12/17/18, 10:15 AM EST: This story was updated to clarify that the YouTube videos were related to police brutality and Black Lives Matter, not just police brutality.

2 Updated, 12/17/18, 4:05 PM EST: This story was updated with comment from Google.


This little-known meme site has hosted two mass shooting threats this month

Created in 2011, iFunny describes itself as a "community for meme lovers and viral memes around the Internet." And indeed, the iFunny homepage is full of your standard internet schlock, including screenshots of tweets or Tumblr posts, GIFs of "The Office," trending TikToks, and Area 51 jokes.
Yet on August 7, the FBI arrested an 18-year-old Ohio man who allegedly threatened to shoot federal law enforcement officers in a post on iFunny. And this past Friday, federal agents arrested a 19-year-old Chicago man for threatening to kill people at a women's reproductive health clinic in a post on iFunny.
In the past year, the online message boards Gab and 8chan, both rife with racist or anti-Semitic messages, have faced scrutiny for their roles as homes to extremists who carried out mass shootings in Pittsburgh, California, New Zealand and El Paso.
iFunny has had its own issues with extremists and white supremacy, and as BuzzFeed News documented last week, the site was full of footage from and praise for the Christchurch mosque attacks in March.
But extremism is not exclusive to those sites. And the fact that extremist threats came on a meme site is not out of the ordinary.
Since the shootings in Dayton and El Paso early this month, law enforcement across the US has made 27 arrests in 26 separate cases over alleged threats delivered in online posts, texts, and messages on a number of social media platforms.
"It's not just in spaces that are specifically tailored for only far-right extremist content," said Keegan Hankes, who studies online extremism for the Southern Poverty Law Center. "You have extremists going to other more broad forums and seeding these ideas and trying to find followers."
Still, despite Hankes' expertise on online extremism, he said he had not heard of iFunny until these recent arrests. That speaks to a broader problem with online extremism -- "it's really diffuse," he said.
"These individuals are on all sorts of different platforms, whether it's Discord, Facebook, Instagram, (or) iFunny," he said. "They are all over the place, and there are a lot more of them than I think many people think."

Two arrests for threats this month

Overall, iFunny's content is not all that different from mainstream internet meme sites like Reddit or 9GAG.
Users can click on links in the "memes catalog" along the left side of the homepage featuring general topics that skew toward a young male audience, including Cars, Gaming, Girls, and Sports.
The website has a decently sized audience across its platforms. Its Facebook page has more than 560,000 likes, and its app was the 57th-most popular entertainment app on Apple's App Store as of Wednesday morning. That puts it well below TikTok, Netflix, and YouTube Kids, but near apps like MTV and Sling TV.
Posts on iFunny first came onto the FBI's radar in February, when the bureau's field office in Anchorage observed multiple posts from an account that discussed supporting mass shootings, as well as assaulting and/or targeting Planned Parenthood, according to a criminal complaint.
The FBI subpoenaed iFunny for the account owner's information and received a Gmail account, and a subsequent subpoena to Google then returned his name and IP address. That led authorities to Justin Olsen, an 18-year-old from Boardman, Ohio, the complaint states.
In iFunny posts from June, Olsen discussed the 1993 siege in Waco, Texas, the complaint says. He blamed the Bureau of Alcohol, Tobacco, Firearms and Explosives for the deaths of scores of people living at the site and threatened to shoot federal agents.
He was arrested August 7, and authorities found more than 10,000 rounds of ammunition and 25 guns, including an AR-15, according to charging documents. He allegedly told an FBI agent that his online comments were a joke and referred to them as a "hyperbolic conclusion based on the results of the Waco siege ... where the ATF slaughtered families."
Olsen is charged with threatening to assault a federal law enforcement officer. CNN has reached out to his attorney Ross T. Smith for comment.
Just over a week later, the FBI arrested 19-year-old Farhan Sheikh for a series of posts threatening to kill people at a women's health clinic. He was arrested Friday and charged with transmitting a threat in interstate commerce, federal prosecutors said.
Around August 13, the FBI learned that a user on iFunny had posted a threat, according to court papers. In the post, Sheikh allegedly wrote: "I am done with my state and thier (sic) bullsh*t abortion laws and allowing innocrnt (sic) kids to be slaughtered for the so called 'womans (sic) right,'" according to an FBI affidavit.
Sheikh wrote that he planned to go to the clinic, which was about 4 miles from his home, on August 23 and "proceed to slaughter and murder any doctor, patient or visitor," an affidavit said.
One of the posts referenced the account handle tied to Olsen and said they arrested him "for no reason except supressing us and our freedoms."
He also posted that his iFunny account was "NOT a satirical account. I post what I mean, and i WILL carry out what I post," court papers said.
On August 13, FBI agents searched his home. Sheikh told FBI agents he thought they had come because of a "joke" he posted on iFunny, court papers said.
Sheikh was "detained as a danger to the community," a Chicago federal judge ruled Tuesday, saying that he is "inherently and deeply unstable." Sheikh's public defender declined to comment Monday.

iFunny executive pledges change

Vladimir Zakoulov, the co-founder of iFunny's parent company FunCorp, told CNN in an email that the site was "shocked" by the arrests.
"As a company we have always supported equality and have been against the restriction of any human right," he said.
He said iFunny would be strengthening its moderation system going forward to prevent future issues.
"We have a rather strong moderation system. Actually it is quite successful in most cases, but due to what happened in (the) last few days we decided to make it even stronger," he said.
The site's posted guidelines ban threats, cyberbullying, violent politics and hateful propaganda, but they do allow what they refer to as "dark humor" on taboo subjects.
Zakoulov claimed that the site uses artificial intelligence and "manual pre-moderation" to filter content that violates its guidelines. He said iFunny also relies on volunteer moderators who review content in their free time, and has a team that examines content that has been flagged by a user or moderator.
Going forward, he said iFunny "will be more focused on the semantics of the uploaded content ... to prevent any criminal actions and any manifestations of intolerance."
He said the site removed all of the accounts that could potentially be involved in hate speech, and said that iFunny will be hiring a few active users to focus on hate speech and intolerant behavior.

CNN's Dave Alsup, Marlena Baldacci, Darran Simon and Bill Kirkos contributed to this report.


Pixalate is publishing its findings relating to the ‘Matryoshka’ ad fraud scheme utilizing the iFunny app on both iOS and Android devices, with slightly different methodologies on each device type.

This scheme has impacted at least two million iOS and Android users, with well in excess of $10 million siphoned from advertisers in 2020, according to Pixalate’s estimates.

The scheme, which Pixalate began tracking in 2019 and was active into at least October 2020, makes unauthorized use of – and misappropriates – consumers’ personal information.

Because most of the ad fraud discoveries published in the advertising industry address schemes specific to Android, this post will focus primarily on the iOS branch of the scheme, with a section at the end dedicated to the Android branch. We have included the malicious scripts used on both iOS and Android. In addition to surfacing an ad fraud scheme impacting iOS users, Pixalate also believes this scheme highlights a significant consumer privacy vulnerability.


Among Pixalate’s key observations of the apparent ad fraud scheme:

    • Consumer Data At Risk: Latitude and longitude coordinates, device ID, and IP address are among the consumer data points used as part of the scheme, all of which are transferred to fraudster-designated endpoints.
    • Impacting Certain Battleground U.S. States: The majority (>90%) of invalid traffic (IVT) related to this scheme and flagged by Pixalate occurred in the U.S. Within the U.S., some battleground states have been impacted disproportionately: Pennsylvania, Michigan, and Wisconsin users account for less than 10% of real iFunny traffic* but were the subjects of the attacks nearly 25% of the time in Q3 2020.
    • iOS and Android ad fraud scheme: This ad fraud scheme occurs on both iOS and Android devices, using slightly different methodologies to carry out the scheme. 

How the ‘Matryoshka’ ad fraud scheme works


Here’s how the scheme works on iOS:

  1. The scammers buy a valid banner ad opportunity on iFunny
  2. The creative for the banner ad contains malicious JavaScript code
  3. The code runs and passes along personal information about the end-user (latitude and longitude coordinates, device ID, etc.) to a fraudster-designated endpoint
  4. The fraudster-designated endpoint returns dozens to hundreds of VAST/VPAID tags for video ads that run in the background of the affected end-user’s phone. The ad requests intermingle spoofed information (such as pretending to be from an app other than iFunny) with real consumer data (like latitude and longitude coordinates)
  5. The ad requests are executed up to hundreds or even thousands of times, defrauding advertisers and using consumers’ phones and personal information without their knowledge or consent
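The five steps above can be sketched in JavaScript, the language the malicious ad creatives themselves use. This is an illustrative reconstruction under stated assumptions, not the actual ad.js code: the endpoint URL, function names, and query parameters are all hypothetical.

```javascript
// Illustrative sketch of the 'Matryoshka' iOS flow. All names here
// (SPOOF_SERVER, collectUserData, the query parameters) are hypothetical.
const SPOOF_SERVER = "https://spoofing-server.example/tags"; // placeholder endpoint

// Step 3: the malicious creative harvests personal data from its environment
function collectUserData(env) {
  return { lat: env.latitude, lon: env.longitude, idfa: env.deviceId, ip: env.ipAddress };
}

// Step 4: a returned VAST tag URL blends a spoofed app identity with real user data
function buildVastUrl(spoofedBundleId, userData) {
  const params = new URLSearchParams({
    bundle: spoofedBundleId,   // spoofed: claims the ad runs in another app
    lat: String(userData.lat), // real consumer data follows
    lon: String(userData.lon),
    ifa: userData.idfa,
    ip: userData.ip,
  });
  return `${SPOOF_SERVER}?${params.toString()}`;
}

// Step 5: each tag would be requested many times in the background
function expandBatch(tagUrls, repetitions) {
  const requests = [];
  for (const url of tagUrls) {
    for (let i = 0; i < repetitions; i++) requests.push(url);
  }
  return requests;
}

const user = collectUserData({
  latitude: 40.0, longitude: -75.0, deviceId: "AAAA-1111", ipAddress: "203.0.113.7",
});
const tagUrl = buildVastUrl("com.example.spoofed-app", user);
const batch = expandBatch([tagUrl], 5);
```

The key point the sketch makes concrete is the blending: a single request carries a fabricated app identity alongside genuine device and location data, which is what makes the traffic look plausible to advertisers.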

On Android, the scheme is substantially similar but the implementation is slightly different (see below for details).

About iFunny

iFunny, which is registered in Seychelles but is purportedly Russian-owned, is a popular meme app with over 10 million Google Play Store downloads and is a top-50 Entertainment app on the Apple App Store.

Based on Pixalate’s research and diligence to date, this specific ad fraud scheme appears to be utilizing the iFunny app on both iOS and Android devices.

iFunny also requests access to the end-users’ precise (latitude and longitude) location. In the Android app Developer Guide, Google deems this a “dangerous permission.”

How the ad fraud scheme makes unauthorized use of consumer data

The consumer data utilized by the fraudsters as part of the scheme includes:

  • Latitude and longitude
  • Device ID
  • IP address

Importantly, Pixalate observed the as-yet-unidentified scammers capture and transfer these consumer data points to fraudster-designated endpoints. The screenshot below, taken in June 2020 on iOS, shows where the scammers log sensitive consumer data (latitude and longitude) to a third-party resource.

Capture of the malicious script on iOS from June 2020. Pixalate has blurred potentially sensitive information. Captured from the ad.js script

For California consumers affected by this scheme, the logging of personal information “for a purpose that the consumer would not reasonably expect,” and without the consumers’ authorization, appears to violate California Consumer Privacy Regulations, which became effective on August 14, 2020 (“CCPA Regulations”), including § 999.305(a)(1), § 999.305(a)(4) and § 999.305(b)(2).

The next screenshot shows latitude and longitude, IP address, and device ID shipped and stored on a fraudster-designated endpoint. As noted above, with respect to California consumers impacted by this scheme, the unauthorized transmission of such consumers’ personal information appears violative of the CCPA.

Captured from the ad.js script

One documented unauthorized use of the consumer data is to blend it with spoofed elements as part of the ad fraud scheme. Below is a screenshot of an app (Angry Birds 2) being spoofed as part of this scheme, with real consumer data (latitude and longitude, device ID, and IP address) blended in.

Pixalate has blurred potentially sensitive information, as well as information regarding third party platforms utilized by the fraudsters to serve spoofed video ads

Other unauthorized uses – or sales – of the data cannot and should not be ruled out.

The scheme impacted some U.S. battleground states

After analyzing more than 1.75B impressions generated by iFunny devices in the U.S. in Q3 2020, Pixalate identified which states were disproportionately impacted by the scheme. In the table below, we show the top 10 states based on the ratio (“Impact Ratio”) of the share of voice (i.e. impression distribution) of spoofed vs. real iFunny impressions by U.S. state, among states accounting for at least 1% of iFunny’s valid traffic.* For example, the contribution of Pennsylvania to the spoofing was 3x larger than the contribution of Pennsylvania to the real iFunny traffic, etc.


[Table: Impact Ratio of spoofed vs. real iFunny impressions, by U.S. state; legible entries include New Jersey and North Carolina.]
The most likely reason battleground states including Pennsylvania, Michigan, and Wisconsin were highly impacted is that scammers behind ad fraud schemes typically seek to make as much money as possible while avoiding detection, and these states experienced heavy advertiser demand leading up to the election.

Additionally, IVT related to the ‘Matryoshka’ ad fraud scheme utilized the personal information of California consumers and transferred it to fraudster-designated endpoints, which would potentially constitute a violation of the California Consumer Privacy Act of 2018 (CCPA).

The code behind the ‘Matryoshka’ ad fraud scheme

Now we’ll dive into the code running behind the scenes when the scammers carry out the attack. Here are the primary scripts utilized by the fraudsters:



The below steps detail the scheme on iOS:

  • Leverage the iFunny App: In order to gain background access to the phone while iFunny is running, the first thing the scammers do is buy a valid banner ad on iFunny and subsequently inject the creative for the ad with a malicious script (the “ad.js”). Pixalate commonly observed the same Jeep creative used as a front, but other creative fronts were used (and we, of course, have no reason to believe that this ad has any actual bona fide relationship with the Jeep brand). Pixalate also commonly observed a single domain used as the presumed “command and control” center of the scheme on iOS.


  • Capture consumers’ personal information: We see the scammers store real user information in a global object to be accessed at a later point. In the code snippet below, you can see IP address, device ID, and latitude and longitude coordinates stored in the global object.

Pixalate has blurred potentially sensitive information
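As a rough illustration of the pattern described above, a script can stash harvested values on a global object so later stages can read them back. The property names in this sketch are assumptions, not taken from the actual ad.js:

```javascript
// Hypothetical sketch of the "global object" pattern: harvested consumer
// data is parked on the global scope for later stages of the script.
globalThis.__adState = {
  ip: "203.0.113.7",               // real IP address, harvested earlier
  deviceId: "AAAA-1111",           // real device ID
  geo: { lat: 40.0, lon: -75.0 },  // real latitude/longitude coordinates
};

// A later stage reads the stored state back when building ad requests
function readHarvestedGeo() {
  const s = globalThis.__adState;
  return s ? `${s.geo.lat},${s.geo.lon}` : null;
}
```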

  • Transfer consumers’ personal information to fraudster-designated endpoints: The code snippets below show how the as-yet-unidentified scammers are tracking and logging each step of the scheme. Of note, this code reveals data leakage. The first screenshot, taken in June 2020 on iOS, shows where the scammers log sensitive consumer data (latitude and longitude) to a third-party resource. The following two screenshots show latitude and longitude, IP address, and device ID shipped and stored on fraudster-designated endpoints.

Capture of the malicious script on iOS from June 2020. Pixalate has blurred potentially sensitive information. Captured from the ad.js script

Captured from the ad.js script

As noted previously, with respect to California consumers affected by the scheme, unauthorized transfer of such consumers’ personal information to fraudster-designated endpoints to be used in video ad spoofing – and perhaps for other unknown purposes – may violate CCPA Regulations.

  • Load the malicious “player_115.77_y”: Below we see the scammers load another malicious script. This script’s purpose is to request third-party VAST tags and parse them for impression and creative information. This script queries the control center, which sends back dozens or even hundreds of requests (per batch) for video ads in the background of the user’s phone. Note that we can see real consumer data once again passed along the chain.

Pixalate has blurred potentially sensitive information. Captured from the ad.js script
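The fetch-and-execute loop described above might be sketched as follows. The response shape (`{ tags: [...] }`) and the function names are assumptions; the real player_115.77_y script is not reproduced in the report excerpts:

```javascript
// Hypothetical sketch of the batch loop: parse the control-center response,
// then fire every VAST tag it contains.
function parseBatch(jsonText) {
  const body = JSON.parse(jsonText);
  return Array.isArray(body.tags) ? body.tags : [];
}

// The player would fetch and execute every tag it receives; the network
// call is stubbed out here so the sketch just counts the requests.
function fireTags(tags, fetchFn) {
  let fired = 0;
  for (const url of tags) {
    fetchFn(url); // in the real scheme: a hidden background video ad request
    fired += 1;
  }
  return fired;
}

const sampleResponse = JSON.stringify({
  tags: ["https://vast.example/tag/1", "https://vast.example/tag/2"],
});
const firedCount = fireTags(parseBatch(sampleResponse), () => {});
```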

  • Execute the spoofing using real consumers’ personal information: The VAST tags sent from the command and control center contain spoofed data about which app the ad is for and are executed up to hundreds or even thousands of times. In the below example, we see Angry Birds 2 being spoofed. Note that real consumer data — including latitude and longitude, device ID, and IP address — is blended in with the spoofed information.

Pixalate has blurred potentially sensitive information, as well as information regarding third party platforms utilized by the fraudsters to serve spoofed video ads

Diagram breakdown of the scheme on iOS and Android

Below are high-level technical overviews of each step of the ad fraud scheme on iOS and Android.


On iOS:

  • The scammers buy a valid banner ad on iFunny, but the HTML creative includes malicious code (see the malicious code here)
  • The HTML file loads a ‘player_115.77_y.js’ script, passing it parameters about the current environment (see the script here)
  • The player performs an Ajax request to the spoofing server and passes parameters about the current environment, as well as data about the end-user
  • The spoofing server returns a JSON object containing URLs to various VAST/VPAID tags. The URLs contain spoofed data about which app the ad is for but also blend in real data about the end-user, such as device ID, IP address, and latitude and longitude coordinates. The number of URLs returned varies, but a typical request may return dozens or even hundreds of tag URLs per batch
  • The player fetches and executes these VAST and VPAID tags hundreds or even thousands of times


On Android:

  • The scammers buy a valid banner ad on iFunny, but the HTML creative includes a ‘player.js’ script that contains base64-encoded JSON containing various URLs to VAST/VPAID tags
  • The ‘player.js’ script loads a ‘player_116.86_m.js’ script, which has the ability to fetch and execute these tags
  • The ‘player.js’ script passes the decoded JSON to the ‘player_116.86_m.js’ script through a variable set on the window
  • The ‘player_116.86_m.js’ script fetches the passed VAST/VPAID tags hundreds or even thousands of times
  • The script executes the fetched VAST/VPAID tags
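The Android handoff above can be sketched in a few lines. The field names (`vastTags`, `__tagPayload`) are hypothetical; the actual player.js and player_116.86_m.js scripts are not reproduced here:

```javascript
// Hypothetical sketch of the Android branch: base64-encoded JSON of tag
// URLs is decoded by 'player.js' and handed to the fetch-and-execute
// script via a variable set on the global (window) object.
const encodedPayload = Buffer.from(
  JSON.stringify({ vastTags: ["https://vast.example/a", "https://vast.example/b"] })
).toString("base64");

// player.js: decode the embedded payload and publish it on the global object
globalThis.__tagPayload = JSON.parse(
  Buffer.from(encodedPayload, "base64").toString("utf8")
);

// player_116.86_m.js (sketch): read the payload and queue each tag repeatedly
function collectRequests(payload, repetitions) {
  const out = [];
  for (const url of payload.vastTags) {
    for (let i = 0; i < repetitions; i++) out.push(url);
  }
  return out;
}

const queued = collectRequests(globalThis.__tagPayload, 3);
```

The base64 wrapping adds no security; it simply hides the tag URLs from a casual inspection of the creative's HTML.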

Below is one of the spoofed VAST tags from the Android branch of the scheme, captured by Pixalate’s research team. In the below example, we see Daily Themed Crossword being spoofed, with spoofed elements blended in with real consumer data.

Pixalate has blurred potentially sensitive information, as well as information regarding third party platforms utilized by the fraudsters to serve spoofed video ads 

The apps spoofed most often in the ‘Matryoshka’ scheme

In the above example, we see Angry Birds 2 spoofed, but that is just one of several thousand apps that were spoofed as part of this scheme. Here are the top 10 apps spoofed on iOS and Android, respectively:

We’ve also shared the top 50 apps spoofed in the ‘Matryoshka’ ad fraud scheme on each device type, including bundle ID and app identifiers.

MRC and TAG guideline violations related to the ‘Matryoshka’ ad fraud scheme

As defined by the Media Rating Council (MRC), the specific Sophisticated Invalid Traffic (SIVT) types identified in this scheme include elements of app misrepresentation (i.e. “spoofing”), manipulated activity, falsified measurement events, and malware that conducts deceptive actions. Similarly, such traffic is classified as manipulated behavior and false representation, as defined by the Trustworthy Accountability Group (TAG).

  1. pp. 6-7, MRC IVT Guidelines (June 2020)
  • “Manipulated activity: Forced new browser window opening, forced tab opening, forced mobile application install (mobile re-direct), forced clicking behavior, tricking users to click / accidental clicks, clickjacking (UI redress attack) and hijacked measurement events”
  • “Falsified measurement events: visit, impression, viewability, click, location (specific to location falsification aimed at generating invalid ad activity, but not necessarily including validation of exact location for targeting purposes), referrer, consent string, conversion attribution and user attribute spoofing as well as Server Side Ad Insertion (SSAI) spoofing where applicable to a measurement organization”
  • “Domain and App misrepresentation: App ID spoofing, domain laundering and falsified domain / site location”
  • “Adware and Malware that conduct deceptive actions including ad injection and unauthorized overlays”
  2. pp. 7-8, TAG IVT Taxonomy 2.0
  • “False Representation: An ad request for inventory that is different from the actual inventory being supplied, including ad requests where the actual ad is rendered to a different website or application, device, or other target (such as geography).”
    • Examples: “Spoofed measurements, Domain Spoofing, Emulators Masquerading as Real User Devices, Parameter Mismatch (Inconsistencies in Transaction and Browser/Agent Parameters)”
  • “Manipulated Behavior: A browser, application, or other program that triggers an ad interaction without a user’s consent, such as an unintended click, an unexpected conversion, or false attribution.”
    • Examples: “Attribution Manipulation, Accidental Traffic, Forced New Window, Forced Installation of a Mobile Application”
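The taxonomy quoted above can be summarized as a mapping from observed traffic signals to the SIVT buckets the guidelines name. The sketch below is purely illustrative (the signal names are invented; neither MRC nor TAG publishes code):

```python
# Illustrative mapping of hypothetical detection signals onto the
# MRC SIVT categories cited in this section.
SIVT_BUCKETS = {
    "bundle_id_mismatch": "App misrepresentation",
    "forced_click": "Manipulated activity",
    "fake_impression_ping": "Falsified measurement events",
    "unauthorized_overlay": "Malware / deceptive actions",
}

def classify(signals):
    """Return the sorted set of SIVT categories implicated by the signals."""
    return sorted({SIVT_BUCKETS[s] for s in signals if s in SIVT_BUCKETS})

print(classify(["forced_click", "bundle_id_mismatch"]))
# prints ['App misrepresentation', 'Manipulated activity']
```

A scheme like 'Matryoshka' would trip several of these buckets at once, which is why the report cites both the MRC and TAG definitions rather than a single violation type.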

Possible Apple App Store and Google Play Store policy violations

Whether app store policy violations have occurred is ultimately at the discretion of Apple and Google. Additionally, Pixalate is not seeking to assert or assign culpability via this disclosure. However, certain parts of Apple’s App Store Review Guidelines and Google’s Google Play Developer Distribution Agreement may be pertinent to any such inquiry, including:


From Apple’s App Store Review Guidelines:

    • Section 3.2.2 (iii), which states that artificially increasing the number of impressions of ads is “unacceptable”
    • Section 5, which states that “apps must comply with all legal requirements in any location where you make them available”
    • Section 5.1.1 (i), which relates to privacy policies and states that “all uses” of data collected by the app must be “clearly and explicitly” identified and that “any third-party” with which an app shares user data must “provide the same or equal protection of user data as stated in the app’s privacy policy”
    • Section 5.1.2 (i), which states the app must provide access to information about how and where consumer data will be used and that “apps that share user data without user consent or otherwise complying with data privacy laws may be removed from sale”
    • Section 5.1.5, which states “If your app uses location services, be sure to explain the purpose in your app”


From Google’s Google Play Developer Distribution Agreement:

    • Section 4.8, which states that if the product “stores personal or sensitive information provided by users,” the developer “agree[s] to do so securely….”
    • Section 4.9, which states that the developer will “not engage in any activity with Google Play, including making [the developer’s] Products available via Google Play, that interferes with, disrupts, damages, or accesses in an unauthorized manner the devices, servers, networks, or other properties or services of any third party”

Additionally, the Google Play Store Policy Center — Ads section may contain information relevant to any such inquiry, including:

  • “Ads must only be displayed within the app serving them.”
  • “Apps that extend usage of permission based device location data for serving ads” must make it “clear” to the user how the data is being used
  • “Ads associated with your app must not interfere with other apps … or the operation of the device….”

Indicators of Compromise

Domains observed by Pixalate:





* Based on programmatic ads sold, as measured by Pixalate, Q3 2020.


Pixalate is neither asserting nor assigning culpability with our research and insights. It is our belief that our readers may be interested in learning more about ad fraud, particularly on iOS devices, as most of the mobile app ad fraud schemes uncovered to date have focused exclusively on Android.

For questions, please contact [email protected]


iFunny Has Become A Hub For White Nationalism

This week, an 18-year-old Ohio man was charged with threatening a federal officer. This came after law enforcement seized 15 rifles, 10 semiautomatic pistols, and 10,000 rounds of ammunition from his home. Justin Olsen came to the attention of authorities in the same way that several young white men who allegedly threatened to carry out mass shootings have — he posted about it online. The difference? Olsen’s main internet hangout wasn’t 8chan or Gab, but the meme-sharing website and app iFunny, where he posted under the name ArmyOfChrist, according to court documents.

Olsen’s iFunny account, it turns out, was just one node in a roiling hive of far-right activity. As of Wednesday, ArmyOfChrist was still online and had over 5,000 subscribers.

In the 200 posts on Olsen’s account, which were viewed by BuzzFeed News, he raged against feminists, progressives, the LGBTQ community, and religious and ethnic minorities, and repeatedly called for the establishment of a Christian ethnostate. Many of the memes he posted were fixated on the Crusades, fantasizing about a religious war between Christians and Muslims.

Olsen used his account to advertise a personal Discord channel, which had about 40 participants in it. It was in the Army Of Christ Discord that Olsen wrote, “In conclusion, shoot every federal agent on sight.”

Olsen’s server was active until Tuesday, following inquiries from BuzzFeed News. Discord did not respond to requests for comment.

Since its creation in 2011, iFunny has been largely ignored by the mainstream internet. But the app, which is number 62 in the Entertainment category in Apple’s App Store and popular with teen boys, is owned by Russian developer Okrujnost and run by David Chef, known as Cheffy by the iFunny community. BuzzFeed News has reached out to Chef for comment.

iFunny, which is available online and as an app, is divided into sections featuring content curated by moderators, as well as a section to follow subscribed accounts.

Visiting the site reveals a heavily curated front page — your typical meme fare, screenshots of viral tweets, GIFs from Reddit, jokes about Minecraft — but digging below the surface, the picture darkens.

BuzzFeed News spoke to an iFunny user who requested anonymity and who mapped out the subterranean radicalized space.

“I’ve been using iFunny since it came out in 2011,” the anonymous user said. “These kind of things really picked up at the height of offensive conservative counterculture in 2016. I guess the ideas remained for a lot of the underground users, so as time went on they evolved to have a community that was similar to 8chan.”

The iFunny user provided BuzzFeed News with a list of larger radicalized accounts. The content shared there is on par with anything that was being posted on the now-offline 8chan. Following news of Olsen’s arrest this week, many of these accounts labeled themselves “satire” and shared memes about the FBI.

One account called RaceWar, which was on the list of radicalized accounts, has over 6,000 subscribers and has posted close to 20,000 times. Their posts alternate between mainstream memes and hardcore neo-Nazi propaganda. Another account on the list called Traditional_Nationalist, which has close to 20,000 subscribers, is full of white nationalist propaganda drawn from screenshots from 8chan. Zaoist, with around 9,000 subscribers, posts “trad Christian” content similar to what Olsen was sharing on his ArmyOfChrist account.

Just as 8chan’s users celebrate mass shooters, iFunny has its own radicalized icons. One user named Shaug is, according to the community, allegedly connected to a 2014 shooting threat at Land O’ Lakes High School in Florida. The user, Shaugureth, told BuzzFeed News that the whole thing was actually a prank orchestrated by another iFunny user and he was cleared of any wrongdoing. Another personality popular on iFunny is Samuel Woodward, an alleged member of the violent neo-Nazi group Atomwaffen Division, who was known on the app as Saboteur. Woodward was accused of killing his gay former classmate last year. He’s currently facing up to life in prison without parole. Woodward’s username is still active on iFunny, though it’s unclear who posts under it. Memes put up this week show Shaug and Woodward interacting with Olsen’s account following his arrest.


A meme depicting Olsen joining the ranks of other radicalized iFunny users Shaug and Samuel Woodward.

A spokesperson for iFunny on Wednesday told BuzzFeed News that the company had not seen any increase in radicalized activity.

“Honestly, we see the opposite,” they said. “We believe in a segregation of duties — we will continue the process of banning of the content that violates guidelines/law and the Authorities will continue to control the potential criminals in a real life.”

Earlier this week, an iFunny spokesperson told BuzzFeed News that they presumed the criminal activity happening on the app was a normal amount.

“We assume that percent of the potential criminals among them has 100% correlation with the percent of the potential criminals among the whole society,” a spokesperson for the site told BuzzFeed News.

In the initial email to BuzzFeed News, the spokesperson also bragged about iFunny’s Comscore numbers, sending a PDF of the 2017 U.S. Mobile App Report that ranked iFunny the top-indexed app in the 18–24 demographic.

“iFunny is the most influential mobile app among young adults in the US,” the spokesperson said.

The anonymous iFunny user whom BuzzFeed News spoke to claimed that the radicalization taking over the site had been building for several years. “I can’t really pinpoint the start of these accounts, but I’d assume they started posting ironic and edgy memes and slowly shared their ideologies,” he said. “They started radicalizing because it’s kind of an echo chamber in the app.”



WIPO Arbitration and Mediation Center


Okruzhnost LLC v. WhoIs Privacy Protection Service, Inc./Six Media Ltd.

Case No. D2014-0373

1. The Parties

The Complainant is Okruzhnost LLC of Penza, Russian Federation, represented by Baker & McKenzie, Russian Federation.

The Respondent is WhoIs Privacy Protection Service, Inc. of Kirkland, Washington, United States of America (“USA”) / Six Media Ltd. of Hong Kong, China, represented by, USA.

2. The Domain Name and Registrar

The disputed domain name <> (“Disputed Domain Name”) is registered with eNom (the “Registrar”).

3. Procedural History

The Complaint was filed with the WIPO Arbitration and Mediation Center (the “Center”) on March 11, 2014. On the same day, the Center transmitted by email to the Registrar a request for registrar verification in connection with the Disputed Domain Name. Also on March 11, 2014, the Registrar transmitted by email to the Center its verification response disclosing registrant and contact information for the disputed domain name which differed from the named Respondent and contact information in the Complaint. The Center sent an email communication to the Complainant on March 12, 2014 providing the registrant and contact information disclosed by the Registrar and invited the Complainant to submit an amendment to the Complaint. The Complainant filed an amendment to the Complaint on March 17, 2014.

The Center verified that the Complaint together with the amendment to the Complaint satisfied the formal requirements of the Uniform Domain Name Dispute Resolution Policy (the “Policy” or “UDRP”), the Rules for Uniform Domain Name Dispute Resolution Policy (the “Rules”), and the WIPO Supplemental Rules for Uniform Domain Name Dispute Resolution Policy (the “Supplemental Rules”).

In accordance with paragraphs 2(a) and 4(a) of the Rules, the Center formally notified the Respondent of the Complaint, and the proceedings commenced on March 18, 2014. As agreed by the Parties, the extended due date for Response was April 17, 2014. The Response was filed with the Center on April 17, 2014.

The Center appointed Gabriela Kennedy, Nicholas Weston and The Hon Neil Brown Q.C. as panelists in this matter on May 19, 2014. Each member of the Panel has submitted the Statement of Acceptance and Declaration of Impartiality and Independence, as required by the Center to ensure compliance with paragraph 7 of the Rules. The Panel finds that it was properly constituted.

4. Factual Background

The Complainant is a software company that develops mobile applications and websites. The Complainant launched a website “” and mobile application called iFunny in 2011, which allows users to create and share humorous photos, videos and comics. The Complainant is the owner of the IFUNNY trade mark registered in the Russian Federation on August 27, 2013 (with a priority date of July 5, 2012), and the IFUNNY trade mark registered in the USA on November 5, 2013 and September 17, 2013.

The Respondent is Six Media Ltd, a Hong Kong company. Since 2009, the Respondent has operated the website “”, which contains humorous posters and images. In July 2012 the Respondent acquired the Disputed Domain Name. The Disputed Domain Name resolves to a website that contains humorous content, photos and videos.

5. Parties’ Contentions

A. Complainant

The Complainant’s contentions can be summarized as follows:

(a) The Complainant relies on its trade mark rights in the IFUNNY mark, registered in the Russian Federation on August 27, 2013 (with a priority date of July 5, 2012), and in the USA on November 5, 2013 and September 17, 2013. The Complainant also claims common law rights in the IFUNNY trade mark dating back to April 2011, when it first began using it. The Complainant argues that its IFUNNY trade mark is identical to the Disputed Domain Name, save for the generic Top-Level Domain (“gTLD”) extension, which should not be taken into account.

(b) The Complainant argues that the Respondent has no rights or legitimate interests in the Disputed Domain Name, as it was registered and is being used in bad faith. The Respondent has not made any use of, or demonstrated any preparations to use, the Disputed Domain Name in connection with a bona fide offering of goods or services.

(c) The Complainant contends that the word “ifunny” is not a generic term. This is based on figures provided by Google Trends, which show that use of “ifunny” for any purpose whatsoever remained negligible until July 2011, three months after the Complainant launched its “” website and related application. Further, the Russian Federation and US trade mark offices found that the IFUNNY trade mark was not generic and was sufficiently distinctive to enable the mark to be registered. The use of “ifunny” as a keyword in Google searches increased significantly three months after the Complainant launched its “” website and related application. The Complainant therefore argues that the word “ifunny” gained recognition and secondary meaning in connection with the Complainant’s products, and had become well-known by the time the Respondent acquired the Disputed Domain Name in August 2012.

(d) The website of the Disputed Domain Name contains links to a substantially similar resource (“”), which also contains a link to the Disputed Domain Name. The website to which the Disputed Domain Name resolves and “” have an almost identical concept of funny pictures, the same design and layout, and are hosted at the same IP address. The Complainant therefore argues that these two websites are controlled and operated by the same developers. The “” website has photos dating back almost two years prior to the Complainant’s launch of its products under the IFUNNY mark, and almost three years prior to the Respondent’s acquisition of the Disputed Domain Name. The Complainant therefore argues that the operators and developers of the website to which the Disputed Domain Name resolves must have been aware of the Complainant and its IFUNNY products, since they are in the same industry. Further, the Complainant contends that a simple search prior to the acquisition of the Disputed Domain Name would have revealed the Complainant’s rights in the IFUNNY mark.

(e) For the foregoing reasons, the Complainant argues that the Respondent must have registered the Disputed Domain Name with the intent of taking advantage of the Complainant’s IFUNNY mark, by causing confusion between the Complainant’s mark and the Disputed Domain Name in order to divert consumers to the Respondent’s websites. Any use of the Disputed Domain Name in respect of products that compete with those of the Complainant cannot constitute a bona fide offering of goods or services or fair use. The Complainant also asserts that the Respondent cannot claim to be using the Disputed Domain Name for legitimate noncommercial or fair use purposes, since the website to which the Disputed Domain Name resolves contains sponsored advertisements.

(f) The Respondent is also allegedly not commonly known by the Disputed Domain Name, and the Respondent has attempted to conceal its identity through the use of a proxy service provider, anonymous feedback forms on its websites and failure to clearly identify itself on the terms of use and privacy policies on its websites.

(g) The Complainant maintains that the Respondent registered and is using the Disputed Domain Name in bad faith in order to intentionally attract users for commercial gain, by creating a likelihood of confusion with the Complainant’s IFUNNY mark. Users searching for the Complainant and its IFUNNY products may inadvertently visit the Disputed Domain Name, which could be easily confused with the Complainant’s products. The website to which the Disputed Domain Name resolves also increases the likelihood of confusion. The Respondent should have carried out at least some minimum due diligence at the time it acquired the Disputed Domain Name – a simple Google search would have allegedly revealed the use by the Complainant of the IFUNNY mark.

B. Respondent

The Respondent’s contentions can be summarized as follows:

(a) The Respondent claims to have registered the Disputed Domain Name because it incorporates a generic term, i.e. it is comprised of the English dictionary word “funny”, and is preceded by the letter “i”, which commonly denotes the Internet.

(b) In 2008, the Respondent allegedly tried to acquire the domain name <>, but its offer was rejected. In 2009, the Respondent therefore purchased <>, and launched its website displaying humorous posters and images. The Respondent claims that it later decided to acquire another domain name with broader appeal. As such, the Respondent acquired the Disputed Domain Name on July 30, 2012, and launched a website related to humorous products and images, consistent with the generic meaning of the dictionary word “funny”. The Complainant only filed its IFUNNY trade mark applications on July 5 and July 30, 2012. The actual dates of registration of the Complainant’s trade mark are after the date the Disputed Domain Name was acquired by the Respondent. Therefore, the Respondent contends that it cannot be found to have purchased the Disputed Domain Name in order to target the Complainant, since the Complainant’s IFUNNY trade marks were not registered until a year after the Disputed Domain Name was acquired by the Respondent.

(c) The Respondent also applied for its own trade mark registration for IFUNNY in Hong Kong on July 12, 2013, before it allegedly became aware of the Complainant.

(d) Between September 2012 and September 2013, the Complainant contacted the Respondent to try to purchase the Disputed Domain Name from it. The Respondent informed the Complainant in January 2013 that the Disputed Domain Name was not for sale.

(e) Many third parties have registered other domain names incorporating the word “funny”, and preceded by either the letter “e” or “i”, long before the Complainant’s first alleged use of the IFUNNY mark (e.g. <>, <>, <>, etc). The Complainant must have been aware of the extensive and common third-party use of the word “funny” with the letters “i” or “e”, before it launched its products and began using the IFUNNY mark.

(f) The Respondent allegedly registered the Disputed Domain Name based on its common descriptive meaning, in order to profit off of the generic word, and not to target the Complainant or its IFUNNY mark. The Respondent has even registered and used other common word domain names to host comedic and humorous related websites, such as “”. The Respondent is actively using the Disputed Domain Name in connection with the dictionary meaning of the generic word “funny”. Further, the Respondent’s operation and active development in relation to humorous websites predates the Complainant. The Respondent therefore claims to have a legitimate interest and right in the Disputed Domain Name.

(g) The Respondent claims that there can be no finding of bad faith in the absence of any direct proof that the Disputed Domain Name, which incorporates a descriptive term, was used solely to profit from the Complainant’s mark. The Respondent argues that there is no evidence that it knew of the Complainant prior to acquiring the Disputed Domain Name. The Respondent allegedly registered the Disputed Domain Name because it incorporates common dictionary words, and has been using it in a descriptive manner to provide a humorous website.

(h) The use of a proxy service provider was legitimate, and there is no evidence that the Complainant was unable to contact the Respondent or that the Respondent was attempting to avoid the Complainant. The Complainant itself used a proxy service provider to register its <> domain name.

6. Discussion and Findings

Under paragraph 4(a) of the Policy, the Complainant is required to prove each of the following three elements:

(i) the Disputed Domain Name is identical or confusingly similar to a trade mark or service mark in which the Complainant has rights; and

(ii) the Respondent has no rights or legitimate interests in respect of the Disputed Domain Name; and

(iii) the Disputed Domain Name has been registered and is being used by the Respondent in bad faith.

A. Identical or Confusingly Similar

The Panel accepts that the Complainant has rights in respect of the IFUNNY trade mark on the basis of its trade mark registrations in the Russian Federation and the USA. The Panel notes that the Complainant did not register the IFUNNY mark until August 27, 2013, September 17, 2013 and November 5, 2013, which is over a year after the Disputed Domain Name was registered by the Respondent in 2012. However, the Panel refers to paragraph 1.4 of the WIPO Overview of WIPO Panel Views on Selected UDRP Questions, Second Edition (“WIPO Overview 2.0”), which states that registration of a domain name before a complainant acquires trade mark rights in a name does not prevent a finding of identical or confusing similarity under the first element of the UDRP.

Whether the Complainant had rights in the trade mark at the time of registration of the Disputed Domain Name may be relevant to the consideration of the second and third elements, but it is not relevant for the purposes of determining whether the Disputed Domain Name is confusingly similar to a trade mark in which the Complainant has rights.

It is now well-established that in making an enquiry as to whether a trade mark is identical or confusingly similar to a domain name, the gTLD extension, in this case “.com”, may generally be disregarded (see Rohde & Schwarz GmbH & Co. KG v. Pertshire Marketing, Ltd, WIPO Case No. D2006-0762).

Accordingly, the Panel finds that the Disputed Domain Name is identical to the Complainant’s registered mark, and that paragraph 4(a)(i) of the Policy is satisfied.

B. Rights or Legitimate Interests

Paragraph 2.1 of the WIPO Overview 2.0 states that once a complainant establishes a prima facie case in respect of the lack of rights or legitimate interests of a respondent, the respondent then carries the burden of proving otherwise. Where the respondent fails to do so, a complainant is deemed to have satisfied paragraph 4(a)(ii) of the Policy.

The Panel accepts that the Complainant has not granted the Respondent a licence to use the Complainant’s IFUNNY mark. The Panel further notes that the Respondent has not provided any evidence to demonstrate that it has become commonly known by the Disputed Domain Name. Accordingly, the Panel is of the view that a prima facie case is established and it is for the Respondent to prove it has rights or legitimate interests in the Disputed Domain Name.

It is widely accepted that even where a complainant owns trade mark registrations for a generic or descriptive term that has been incorporated in a disputed domain name, this does not necessarily mean that the disputed domain name should be automatically transferred to the complainant (see U.S. Nutraceuticals, LLC v. Telepathy, Inc. c/o Development Services, NAF Claim No. 365884 <>; National Trust for Historic Preservation v. Barry Preston, WIPO Case No. D2005-0424 <>; The Landmark Group v., L.P., NAF Claim No. 285459 <>; Sweeps Vacuum & Repair Center, Inc. v. Nett Corp., WIPO Case No. D2001-0031 <> and Allocation Network GmbH v. Steve Gregory, WIPO Case No. D2000-0016 <>).

Paragraph 2.2 of the WIPO Overview 2.0 sets out factors to be taken into account when determining whether or not a respondent may have rights or legitimate interests in a generic domain name. These factors include the fame of the trade mark in question; whether the respondent has registered other domain names incorporating generic terms; and whether the domain name is used in connection with a purpose relating to its generic meaning.

In this case, the Disputed Domain Name consists of the generic word “funny”, and the letter “i”, which the Panel accepts is commonly used to refer to the Internet. The Disputed Domain Name resolves to a website that contains humorous content and photos (“Website”). The Respondent therefore appears to be using the Disputed Domain Name in a descriptive sense to describe the type of content that is made available via the Disputed Domain Name, i.e. the provision of funny content via the Internet.

As such, in the absence of any direct evidence that the Respondent is targeting the Complainant in any way through use of the Disputed Domain Name, rather than simply using the Disputed Domain Name in a descriptive sense, the Panel is prepared to find that the Respondent’s use of the Disputed Domain Name and the Website constitutes legitimate use under the Policy (see Sweeps Vacuum & Repair Center, Inc. v. Nett Corp., WIPO Case No. D2001-0031; and National Trust for Historic Preservation v. Barry Preston, WIPO Case No. D2005-0424).

The Panel considers its finding supported by other relevant factors, such as registration of the domain name <> in 2009, which also contains generic words or phrases, and the lack of evidence provided by the Complainant as to its fame and well-known status outside of the USA, particularly in Hong Kong where the Respondent is located. The Panel’s finding is also supported by the fact that the Respondent has been involved in the business of operating a website (“”) which contains humorous content and photos since 2009, at least two years prior to the Complainant’s launch of its IFUNNY products. The registration and use of the Disputed Domain Name and Website therefore appears consistent with the Respondent’s pre-existing business operations.

In light of the above, the Panel finds that the Respondent has shown that it has rights or legitimate interests in the Disputed Domain Name and that the Complainant has failed to satisfy paragraph 4(a)(ii) of the Policy.

C. Registered and Used in Bad Faith

In light of the Panel’s finding under the second limb, the Panel need not consider whether or not the Disputed Domain Name was registered and used in bad faith.

7. Decision

For the foregoing reasons, the Complaint is denied.

Gabriela Kennedy
Presiding Panelist

Nicholas Weston

The Hon Neil Brown Q.C.
Date: June 2, 2014
