
Recent Posts

1
UnHackMe is a useful security tool that can save you a lot of headaches. It’s not an antivirus: it lets you inspect and remove suspicious items manually, and it excels at fixing the issues that antivirus programs miss.

The software provides an extra layer of security for your Windows system.

This security tool works in conjunction with your existing antivirus or Internet security suite. It’s compatible with all popular antivirus software.

The software identifies and protects your Windows system against malicious software such as rootkits, Trojans, spyware, keyloggers, unwanted processes, popup ads, and PUPs (potentially unwanted programs).

It’s the only rootkit killer that monitors your system in real time to detect and remove rootkit infections. UnHackMe’s detection method is very thorough: it double-checks the entire Windows system.

The main difference between UnHackMe and other anti-rootkit software is the detection method: UnHackMe tries to detect hidden rootkits by watching the computer from the earliest stages of the boot process all the way into normal Windows mode.

This all-in-one malware-removal toolbox can also test Windows shortcuts, check the browser’s search settings and add-ons, and inspect the hosts file, DNS settings, and startup files using several antivirus engines.
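
To give a concrete sense of what a hosts-file check involves, here is a minimal Python sketch of the general idea. It is purely illustrative, not UnHackMe’s actual logic; the watched-domain list and the report format are our own assumptions.

Code:
# Illustrative sketch only -- not UnHackMe's actual logic.
# Flags Windows hosts-file entries that redirect well-known domains,
# the kind of hijack a hosts/DNS check is meant to catch.
import os

# Domains malware commonly redirects (assumed list, for illustration).
WATCHED = {"google.com", "www.google.com", "microsoft.com", "windowsupdate.com"}

HOSTS_PATH = os.path.join(os.environ.get("SystemRoot", r"C:\Windows"),
                          "System32", "drivers", "etc", "hosts")

def suspicious_entries(path=HOSTS_PATH):
    findings = []
    with open(path, encoding="utf-8", errors="replace") as f:
        for lineno, line in enumerate(f, 1):
            line = line.split("#", 1)[0].strip()  # drop comments
            if not line:
                continue
            ip, *names = line.split()
            for name in names:
                # Any redirect of a watched domain deserves a manual look.
                if name.lower() in WATCHED:
                    findings.append((lineno, ip, name))
    return findings

if __name__ == "__main__":
    for lineno, ip, name in suspicious_entries():
        print(f"hosts line {lineno}: {name} -> {ip} (review this entry)")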

Moreover, it has an intensive offline scanning feature, commonly referred to as Warrior Mode, that meticulously inspects your system when you boot from a USB drive or CD. Because this external check runs before Windows does, suspicious code never gets the chance to run, and UnHackMe can safely double-check your computer for it.

This malware-removal tool can also perform remote checking: if someone you know suspects suspicious activity on their computer, you can ask them for the program’s log file and diagnose the problem from it.

This anti-rootkit software has a user-friendly, tab-based interface, so all of the program’s features and options are easy to use. To scan your system for malicious programs, hit the ‘Check Me Now’ button.

The software offers five ways to scan your PC: a multi-antivirus (online) scan, an anti-malware scan, a scan in Safe Mode, a scan before Windows starts, and a scan in inspection mode.

This security tool is very light on system resources, so scanning your system with it doesn’t slow down your PC or affect the performance of the other apps running on your system.

UnHackMe also has a ‘Reanimator’ feature, which performs a full spyware check, and a built-in backup & restore feature that lets you recover system files and roll Windows back to a previous state after a virus attack.

Key Features:
Easy-to-use interface.
Rootkit revealer that can actually remove rootkits.
Detects and removes a variety of malware, including Trojans and PUPs.

Get UnHackMe 12.90 License for Free :
Download the giveaway version from here

Install and launch the software. On the main window, click the “Register Now” button and enter the license code below.

License Key: U129-ES91-4KRZ-Q72P
Via Techsupport | Developer page
2
No, that headline isn’t a joke. Unfortunately, there’s a significant vulnerability that’s actively being exploited in the wild through Internet Explorer and Office, and Microsoft has released a patch to fix it. You need to update your PC to protect it as soon as possible.

Update to Fix This Zero-Day Exploit

We first reported on this issue last week, and now Microsoft has solved the zero-day exploit with a new Windows update.

The exploit used Office files containing malicious ActiveX controls that could grant a threat actor access when the victim simply downloaded and opened a file. When the file is opened, it automatically loads a page in Internet Explorer containing an ActiveX control, which then downloads malware onto the victim’s computer that can be used for all sorts of things.

When the issue was first reported, all we could offer was advice to be careful about what you download. Now, however, we can recommend updating your Windows PC to the latest version to fix the exploit.
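
If you can’t install the patch right away, Microsoft’s advisory for this flaw (CVE-2021-40444) also described an interim workaround: disabling ActiveX installation in Internet Explorer through registry policy. The Python sketch below shows that workaround as we understand it, using the standard winreg module; it must be run from an elevated prompt, and you should verify the key names against Microsoft’s advisory, since patching remains the real fix.

Code:
# Hedged sketch of the interim workaround from Microsoft's advisory for
# CVE-2021-40444: disable ActiveX control downloads in all IE zones by
# setting policy values 1001 and 1004 to 3 ("disable"). Requires an
# elevated (Administrator) Python process; Windows only.
import winreg

BASE = r"SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings\Zones"

for zone in range(4):  # zones 0-3: My Computer, Intranet, Trusted, Internet
    key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, rf"{BASE}\{zone}")
    # 1001 = download signed ActiveX controls,
    # 1004 = download unsigned ActiveX controls; 3 = disabled.
    winreg.SetValueEx(key, "1001", 0, winreg.REG_DWORD, 3)
    winreg.SetValueEx(key, "1004", 0, winreg.REG_DWORD, 3)
    winreg.CloseKey(key)

print("ActiveX downloads disabled in IE zones 0-3; restart IE/Office to apply.")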

As part of Microsoft’s Patch Tuesday, the company fixed a total of 66 security flaws, which is always welcome. The most significant issue is the one mentioned above, but the update also fixes two remote code execution vulnerabilities, in the WLAN AutoConfig service and Open Management Infrastructure.

Don’t Wait to Update!

If you’re using Windows, you need to download these updates to fix the critical security holes. Of course, you should still be careful when downloading files from unknown sources, but at least with this patch, you can rest easy knowing a gaping hole in your PC’s security has been closed.

source
3
Contrary to expectations, Microsoft appears to be enforcing its TPM 2.0 requirement for Windows 11 virtual machines: VMs without one cannot update to the latest Insider build of Windows 11, build 22458.

Users on Twitter are reporting the following issue warning.



Microsoft recently clarified the requirements for Insiders to continue in the program, saying that Windows Insiders whose PCs do not meet the minimum Windows 11 requirements (e.g., TPM 2.0) but who could still install the OS would be able to stay in the program.



It is unclear what has changed, or whether Microsoft’s original clarification was misinterpreted. Hopefully, more information will become available shortly.
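
If you want to check what your VM reports in the meantime, Windows ships a small utility, tpmtool, that prints the TPM’s presence and specification version (in a VM, a missing TPM usually means enabling the virtual TPM in the hypervisor, e.g. Hyper-V’s Security settings). Below is a quick Python wrapper, with the caveat that the "2.0" substring check on the output is a loose assumption of ours; run the command directly for the authoritative report.

Code:
# Quick check of TPM presence/version via the built-in Windows tpmtool.
# The substring test below is a loose heuristic, not an official API.
import subprocess

def tpm_report() -> str:
    result = subprocess.run(
        ["tpmtool", "getdeviceinformation"],
        capture_output=True, text=True, check=False,
    )
    return result.stdout or result.stderr

if __name__ == "__main__":
    info = tpm_report()
    print(info)
    if "2.0" in info:
        print("A TPM 2.0 device appears to be present.")
    else:
        print("No TPM 2.0 reported; a VM may need its virtual TPM enabled.")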

source
4
Microsoft / Microsoft rolls out passwordless login for all Microsoft accounts
« Last post by javajolt on September 15, 2021, 07:21:10 PM »
Microsoft is rolling out passwordless login support over the coming weeks, allowing customers to sign in to Microsoft accounts without using a password.

The company first allowed commercial customers to roll out passwordless authentication in their environments in March, after a breakthrough year in 2020 in which Microsoft reported that over 150 million users were logging into their Azure Active Directory and Microsoft accounts without using a password.

Rolling out to all Microsoft accounts

Redmond announced that, starting today, users are no longer required to have a password on their accounts.

Instead, they can choose between the Microsoft Authenticator app, Windows Hello, a security key, or phone/email verification codes to log into Microsoft Edge or Microsoft 365 apps and services.

"Now you can remove the password from your Microsoft account and sign in using passwordless methods like Windows Hello, the Microsoft Authenticator mobile app or a verification code sent to your phone or email," said Liat Ben-Zur, Microsoft Corporate Vice President.

"This feature will help to protect your Microsoft account from identity attacks like phishing while providing even easier access to the best apps and services like Microsoft 365, Microsoft Teams, Outlook, OneDrive, Family Safety, Microsoft Edge, and more."

As Microsoft Corporate Vice President for Security, Compliance, and Identity Vasu Jakkal added, threat actors use weak passwords as the initial attack vector in most attacks across enterprise and consumer accounts. Microsoft detects 579 password attacks every second, a total of 18 billion each year.

"One of our recent surveys found that 15 percent of people use their pets' names for password inspiration. Other common answers included family names and important dates like birthdays," Jakkal said.

"We also found 1 in 10 people admitted reusing passwords across sites, and 40 percent say they’ve used a formula for their passwords, like Fall2021, which eventually becomes Winter2021 or Spring2022."

How to go passwordless right now

To start logging in to your Microsoft account without a password, you first need to install the Microsoft Authenticator app and link it to your personal Microsoft account.

Next, you have to go to your Microsoft account page, sign in, and turn on ‘Passwordless Account’ under Advanced Security Options > Additional Security Options.

The last steps require you to follow the on-screen prompts and approve the notification displayed by the Authenticator app.

More info on using a passwordless method to sign in to your account is available on Microsoft's support website.

"Passwordless solutions such as Windows Hello, the Microsoft Authenticator app, SMS or Email codes, and physical security keys provide a more secure and convenient sign-in method," Microsoft explains.

"While passwords can be guessed, stolen, or phished, only you can provide fingerprint authentication, or provide the right response on your mobile at the right time."
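
For the curious: verification codes of this kind are generally time-based one-time passwords (TOTP, RFC 6238), an HMAC computed over the current 30-second time step, which is why a code is only right "at the right time". The standard-library Python sketch below illustrates the mechanism; Microsoft Authenticator’s actual sign-in flow (push approvals tied to your account) is more involved, so treat this as a generic illustration only.

Code:
# Generic TOTP (RFC 6238) generator using only the standard library.
# Illustrates the mechanism behind authenticator-app codes; this is NOT
# Microsoft Authenticator's implementation.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval             # current time step
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

if __name__ == "__main__":
    # Example shared secret (base32); real secrets come from enrollment.
    print(totp("JBSWY3DPEHPK3PXP"))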

source
5
Microsoft has already confirmed that Windows 11 version 21H2 will begin rolling out on October 5 and testers can join the Windows Insider program to try the early builds of the operating system.

Chipmakers like Intel, AMD, and Nvidia are steadily preparing their drivers and apps for Windows 11. Earlier this year, Intel and Nvidia rolled out new drivers compatible with both Windows 10 version 21H2 (October 2021 Update) and Windows 11 version 21H2. Today, AMD finally published its Windows 11-ready drivers.

AMD has published Ryzen driver version 3.09.01.140 for Windows 11, and says the update enables all the features found in Radeon Software on Windows 11, including Radeon Boost, Radeon Anti-Lag, Radeon Image Sharpening, and more.

This update also addresses potential issues with Windows 11 on supported chipsets, including 400-series and 300-series.

“Windows 11 is just around the corner, and we know many users are participating in Microsoft’s Windows Insider Program and have access to an early build of Windows. If you’re one of those people, you can now take advantage of all the features found in Radeon Software,” AMD said in a statement.

If you’re interested, you can download the driver from the company’s website. The drivers will not enable Windows 11 support for unsupported PCs.

In addition to Windows 11 drivers, AMD has released Radeon Software Adrenalin 21.9.1 driver with support for a new feature called “Smart Access Memory” for Radeon RX 5000 graphics cards.

Windows 11 drivers and hardware requirements

As you’re probably aware, Microsoft has set a baseline of hardware requirements for Windows 11. Windows 11 is available as a free download, but it cannot be installed via Windows Update on PCs powered by Intel’s sixth- and seventh-generation processors or AMD Ryzen CPUs older than the Ryzen 2000 series.

In other words, a device with an older processor like a Ryzen 1000-series chip will not officially be able to run Windows 11. However, you can still download the drivers, which improve support for Windows 11 and address bugs affecting the rounded corners of app windows.

In fact, Asus is testing Windows 11 BIOS and driver support for unsupported processors (6th and 7th gen Intel hardware).

In the release notes of firmware updates for STRIX Z270F, Asus confirmed that it is testing Windows 11 support for a motherboard that supports the Kaby Lake series of CPUs, alongside Celeron G3900 and above. These CPUs are officially not compatible with the new operating system.

Windows 11’s extended support currently remains limited to certain motherboards and Intel CPUs, but it’s possible that companies like Asus aren’t done, so motherboards with older Intel or AMD CPUs may yet become eligible for Windows 11 compatibility.

source
6


We’ve heard a few things about the rumored Surface Pro 8 and Surface Go 3, and it seems these devices are a step closer to launch. Microsoft recently applied for FCC certification for a “portable computing device”, likely the Surface Pro 8, and the filing appeared in the FCC database this week.

The FCC certification is titled “Microsoft Corporation Portable Computing Device” with the identification number C3K2010 (also styled C3K-2010 or C3K 2010). “Portable Computing Device” is a very generic term previously used for products like the Surface Pro, Surface Go, and even the Surface Laptop.

As we’re slowly gearing ourselves up for an approaching launch, it’s likely that we’ll learn more about the device in the coming days. Unfortunately, today’s FCC clearance tells us nothing about the unknown Surface product, but our sources have told us that Surface Pro 8 will ship with noticeable design changes.

Additionally, Microsoft has also received FCC approval for an Intel LAN module, possibly for the Surface Pro 8. The FCC ID is “1983” and the product is called “Wireless LAN Module”.



This filing indicates that the Surface Pro 8, or another device in the lineup, will support Wi-Fi 6 via Intel’s AX201 adapter.

The external images feature Intel’s Wi-Fi 6 AX201D2W adapter (Wi-Fi 6 802.11ax, Bluetooth 5.0, HE160, VHT160, CNVio, DFS), which supports the Wi-Fi standard known as 802.11ax as well as the latest version of Bluetooth 5.

We’ve also spotted an FCC filing for “Wireless Input Accessory Device”, which could be a new mouse from the tech giant.

Surface Pro 8 and Surface Go 3 rumors

As we reported back in February, the Surface Pro 7+ was meant for business users only, and Microsoft has been internally testing a consumer Surface Pro 8. The Surface Pro 8 is on its way: it will be announced during the company’s September 22 event and will begin shipping in the following weeks.

The design of the Surface Pro 8 will be familiar and we’re not expecting a complete overhaul, but you can expect minor design improvements. The device could have slimmer bezels, similar to the Surface Pro X, but only time will tell.

We’re also expecting a Surface Go 3 with upgraded internal specs. If we had to guess, the Surface Go 3 will be equipped with a newer, Windows 11-compatible processor from Intel.

In addition to the new Surface Pro and Go models, Microsoft is also believed to be working on successors to the Surface Book and Surface Laptop.

source
7
Ps5 | Ps4 | Ps3 / Five Things To Know About God Of War Ragnarok
« Last post by javajolt on September 14, 2021, 06:17:14 PM »


Like an ancient Norse myth that sat dormant for years before seeping into the zeitgeist, God of War Ragnarok is coming. First teased last year, the much-anticipated sequel to 2018’s God of War finally popped up last week as the splashy capstone of a PlayStation showcase.

The first significant gameplay reveal packed a lot into three minutes, including sweeping vistas and some moments of combat, all alongside characters old (Mimir! Freya! Brok!) and new (Thor! Angrboda!). God of War Ragnarok is still a ways off, with a broad 2022 release window, but we’re officially at that moment in the hype cycle where the info drip-feed starts info-drip-feeding. Here’s what you should know.

It’ll be a single shot again.

Like its 2018 predecessor, God of War Ragnarok will deploy what’s colloquially known as the “guaranteed to make NYU film boys talk about your project” technique. In other words, it’ll play out in one unbroken shot. Ragnarok narrative director Matt Sophos confirmed as much on Twitter.

A new director’s at the helm.

Cory Barlog, who directed the 2018 game, is passing the torch. Heading up Ragnarok is Eric Williams, who’s worked on the series since 2004. But Barlog is still involved with the game. Here’s Williams describing their collaborative process to IGN:

Quote
There were tough times, I’ll be straight up honest, where he was like, ‘Hey, I think you’re messing this part up. Look at it again.’ And I’d be like, ‘Okay, fuck you.’ And I’d walk out of the office and then come back like 10 minutes later and be like, ‘God damn it. You’re right. Let me go look at that.’ Then other times he’d be like, ‘Hey, I thought about that thing again. Maybe don’t put as much stock into that as I was saying.’

Thor’s not the only major newcomer.

You saw him (or, well, his hammer) pop up at the end of last week’s trailer. But the God of Thunder isn’t the only member of the Norse pantheon to make an appearance in Ragnarok. His dad, Odin, will also pop up, voiced by Richard Schiff, best known as Toby on The West Wing. Both father and son are presumably pissed about the deaths of their kin by Kratos’ violent hand in the first game.

You’ll finally be able to check out all nine realms.

Norse mythology famously features nine realms. The 2018 game let you visit, with varying degrees of depth, six of them: Alfheim, Helheim, Jotunheim, Midgard (where much of the game takes place), Muspelheim, and Niflheim. Three others—Svartalfheim, Vanaheim, and the glimmering mead hall of Asgard—remained gated off. You’ll visit all nine in Ragnarok.



But, as Williams told IGN, you won’t just retread old ground for two-thirds of the game. Ragnarok is set in the throes of Fimbulwinter, an earth-changing event that precedes Ragnarok by three years. For Midgard, that means a layer of frost. For the other eight realms, though, it’s not necessarily a permanent winter. Expect changes to the six regions you visited in the first game.

Combat will be more vertical than in the last game.

In Ragnarok’s gameplay reveal trailer, you may have spotted a moment in which Kratos uses the Blades of Chaos—twin knives attached to long chains—to grapple to and clamber up a ledge. That’s because, unlike in the first game, fights in God of War Ragnarok won’t largely play out on a flat plane. Williams told IGN that Ragnarok will have some degree of verticality, and teased an “almost king of the hill”-style mode of combat.

Sorry, still no specific release date.

But when God of War Ragnarok does come out sometime next year, it’ll do so on both PlayStation 4 and PlayStation 5.

source
8
Social Media / Facebook Says Its Rules Apply to All 1/2
« Last post by javajolt on September 14, 2021, 04:12:09 PM »
Company Documents Reveal a Secret Elite That’s Exempt.

A program known as XCheck has given millions of celebrities, politicians, and other high-profile users special treatment, a privilege many abuse.

Mark Zuckerberg has publicly said Facebook allows its more than three billion users to speak on equal footing with the elites of politics, culture and journalism, and that its standards of behavior apply to everyone, no matter their status or fame.

In private, the company has built a system that has exempted high-profile users from some or all of its rules, according to company documents reviewed by The Wall Street Journal.

The program, known as “cross check” or “XCheck,” was initially intended as a quality-control measure for actions taken against high-profile accounts, including celebrities, politicians and journalists. Today, it shields millions of VIP users from the company’s normal enforcement process, the documents show. Some users are “whitelisted”—rendered immune from enforcement actions—while others are allowed to post rule-violating material pending Facebook employee reviews that often never come.

At times, the documents show, XCheck has protected public figures whose posts contain harassment or incitement to violence, violations that would typically lead to sanctions for regular users. In 2019, it allowed international soccer star Neymar to show nude photos of a woman, who had accused him of rape, to tens of millions of his fans before the content was removed by Facebook. Whitelisted accounts shared inflammatory claims that Facebook’s fact-checkers deemed false, including that vaccines are deadly, that Hillary Clinton had covered up “pedophile rings,” and that then-President Donald Trump had called all refugees seeking asylum “animals,” according to the documents.

A 2019 internal review of Facebook’s whitelisting practices, marked attorney-client privileged, found favoritism to those users to be both widespread and “not publicly defensible.”

“We are not actually doing what we say we do publicly,” said the confidential review. It called the company’s actions “a breach of trust” and added: “Unlike the rest of our community, these people can violate our standards without any consequences.”

Despite attempts to rein it in, XCheck grew to include at least 5.8 million users in 2020, documents show. In its struggle to accurately moderate a torrent of content and avoid negative attention, Facebook created invisible elite tiers within the social network.

In describing the system, Facebook has misled the public and its own Oversight Board, a body that Facebook created to ensure the accountability of the company’s enforcement systems.


Source: 2019 Facebook internal review of the XCheck program, marked attorney-client privileged

In June, Facebook told the Oversight Board in writing that its system for high-profile users was used in “a small number of decisions.”

In a written statement, Facebook spokesman Andy Stone said criticism of XCheck was fair, but added that the system “was designed for an important reason: to create an additional step so we can accurately enforce policies on content that could require more understanding.”


At Facebook’s headquarters in Menlo Park, Calif.
PHOTO: IAN BATES FOR THE WALL STREET JOURNAL


He said Facebook has been accurate in its communications to the board and that the company is continuing to work to phase out the practice of whitelisting. “A lot of this internal material is outdated information stitched together to create a narrative that glosses over the most important point: Facebook itself identified the issues with cross-check and has been working to address them,” he said.


Internal documents

The documents that describe XCheck are part of an extensive array of internal Facebook communications reviewed by The Wall Street Journal. They show that Facebook knows, in acute detail, that its platforms are riddled with flaws that cause harm, often in ways only the company fully understands.

Moreover, the documents show, Facebook often lacks the will or the ability to address them.

This is the first in a series of articles based on those documents and on interviews with dozens of current and former employees.

At least some of the documents have been turned over to the Securities and Exchange Commission and to Congress by a person seeking federal whistleblower protection, according to people familiar with the matter.

Facebook’s stated ambition has long been to connect people. As it expanded over the past 17 years, from Harvard undergraduates to billions of global users, it struggled with the messy reality of bringing together disparate voices with different motivations—from people wishing each other happy birthday to Mexican drug cartels conducting business on the platform. Those problems increasingly consume the company.

Time and again, the documents show, in the U.S. and overseas, Facebook’s own researchers have identified the platform’s ill effects, in areas including teen mental health, political discourse, and human trafficking. Time and again, despite congressional hearings, its own pledges, and numerous media exposés, the company didn’t fix them.

Sometimes the company held back for fear of hurting its business. In other cases, Facebook made changes that backfired. Even Mr. Zuckerberg’s pet initiatives have been thwarted by his own systems and algorithms.

The documents include research reports, online employee discussions, and drafts of presentations to senior management, including Mr. Zuckerberg. They aren’t the result of idle grumbling, but rather the formal work of teams whose job was to examine the social network and figure out how it could improve.

They offer perhaps the clearest picture thus far of how broadly Facebook’s problems are known inside the company, up to the CEO himself. And when Facebook speaks publicly about many of these issues, to lawmakers, regulators, and, in the case of XCheck, its own Oversight Board, it often provides misleading or partial answers, masking how much it knows.


Facebook CEO Mark Zuckerberg, right, at a House Financial Services Committee hearing on Capitol Hill in 2019.
PHOTO: ANDREW HARNIK/ASSOCIATED PRESS


One area in which the company hasn’t struggled is profitability. In the past five years, during which it has been under intense scrutiny and roiled by internal debate, Facebook has generated a profit of more than $100 billion. The company is currently valued at more than $1 trillion.


Rough justice

For ordinary users, Facebook dispenses a kind of rough justice in assessing whether posts meet the company’s rules against bullying, sexual content, hate speech, and incitement to violence. Sometimes the company’s automated systems summarily delete or bury content suspected of rule violations without a human review. At other times, material flagged by those systems or by users is assessed by content moderators employed by outside companies.


Source: 2019 Facebook internal review of the XCheck program, marked attorney-client privileged

Mr. Zuckerberg estimated in 2018 that Facebook gets 10% of its content removal decisions wrong, and, depending on the enforcement action taken, users might never be told what rule they violated or be given a chance to appeal.

Users designated for XCheck review, however, are treated more deferentially. Facebook designed the system to minimize what its employees have described in the documents as “PR fires”—negative media attention that comes from botched enforcement actions taken against VIPs.


If Facebook’s systems conclude that one of those accounts might have broken its rules, they don’t remove the content—at least not right away, the documents indicate. They route the complaint into a separate system, staffed by better-trained, full-time employees, for additional layers of review.

Most Facebook employees were able to add users into the XCheck system, the documents say, and a 2019 audit found that at least 45 teams around the company were involved in whitelisting. Users aren’t generally told that they have been tagged for special treatment. An internal guide to XCheck eligibility cites qualifications including being “newsworthy,” “influential or popular” or “PR risky.”

Neymar, the Brazilian soccer star whose full name is Neymar da Silva Santos Jr., easily qualified. With more than 150 million followers, Neymar’s account on Instagram, which is owned by Facebook, is one of the most popular in the world.

After a woman accused Neymar of rape in 2019, he posted Facebook and Instagram videos defending himself—and showing viewers his WhatsApp correspondence with his accuser, which included her name and nude photos of her. He accused the woman of extorting him.


Brazilian soccer star Neymar, left, in Rio de Janeiro in 2019.
PHOTO: LEO CORREA/ASSOCIATED PRESS


Facebook’s standard procedure for handling the posting of “nonconsensual intimate imagery” is simple: Delete it. But Neymar was protected by XCheck.

For more than a day, the system blocked Facebook’s moderators from removing the video. An internal review of the incident found that 56 million Facebook and Instagram users saw what Facebook described in a separate document as “revenge porn,” exposing the woman to what an employee referred to in the review as abuse from other users.

“This included the video being reposted more than 6,000 times, bullying and harassment about her character,” the review found.

Facebook’s operational guidelines stipulate that not only should unauthorized nude photos be deleted, but that people who post them should have their accounts deleted.

“After escalating the case to leadership,” the review said, “we decided to leave Neymar’s accounts active, a departure from our usual ‘one strike’ profile disable policy.”

Neymar denied the rape allegation, and no charges were filed against him. The woman was charged by Brazilian authorities with slander, extortion and fraud. The first two charges were dropped, and she was acquitted of the third. A spokesperson for Neymar said the athlete adheres to Facebook’s rules and declined to comment further.

The lists of those enrolled in XCheck were “scattered throughout the company, without clear governance or ownership,” according to a “Get Well Plan” from last year. “This results in not applying XCheck to those who pose real risks and on the flip-side, applying XCheck to those that do not deserve it (such as abusive accounts, persistent violators). These have created PR fires.”


In practice, Facebook appeared more concerned with avoiding gaffes than mitigating high-profile abuse. One Facebook review in 2019 of major XCheck errors showed that of 18 incidents investigated, 16 involved instances where the company erred in actions taken against prominent users.

Four of the 18 touched on inadvertent enforcement actions against content from Mr. Trump and his son, Donald Trump Jr. Other flubbed enforcement actions were taken against the accounts of Sen. Elizabeth Warren, fashion model Sunnaya Nash, and Mr. Zuckerberg himself, whose live-streamed employee Q&A had been suppressed after an algorithm classified it as containing misinformation.


Pulling content

Historically, Facebook contacted some VIP users who violated platform policies and provided a “self-remediation window” of 24 hours to delete violating content on their own before Facebook took it down and applied penalties.

Mr. Stone, the company spokesman, said Facebook has phased out that perk, which was still in place during the 2020 elections. He declined to say when it ended.

At times, pulling content from a VIP’s account requires approval from senior executives on the communications and public-policy teams, or even from Mr. Zuckerberg or Chief Operating Officer Sheryl Sandberg, according to people familiar with the matter.

In June 2020, a Trump post came up during a discussion about XCheck’s hidden rules that took place on the company’s internal communications platform, called Facebook Workplace. The previous month, Mr. Trump said in a post: “When the looting starts, the shooting starts.”

A Facebook manager noted that an automated system, designed by the company to detect whether a post violates its rules, had scored Mr. Trump’s post 90 out of 100, indicating a high likelihood it violated the platform’s rules.

For a normal user post, such a score would result in the content being removed as soon as a single person reported it to Facebook. Instead, as Mr. Zuckerberg publicly acknowledged last year, he personally made the call to leave the post up. “Making a manual decision like this seems less defensible than algorithmic scoring and actioning,” the manager wrote.
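
Pieced together from the documents’ description, the enforcement flow amounts to a two-tier queue. The Python sketch below is a hypothetical reconstruction for illustration only; the 90-point score and single-report trigger come from the reporting above, but the names and structure are our own, not Facebook code.

Code:
# Hypothetical reconstruction of the two-tier flow the documents describe.
# Field names and structure are illustrative; this is not Facebook code.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    violation_score: int  # 0-100 classifier score, per the documents
    reports: int = 0

WHITELISTED = {"immune_account"}   # exempt from enforcement entirely
XCHECK_USERS = {"vip_account"}     # "newsworthy", "influential or popular", "PR risky"
vip_review_queue = []              # reviewed by better-trained, full-time staff

def enforce(post: Post) -> str:
    if post.author in WHITELISTED:
        return "no action (whitelisted)"
    if post.author in XCHECK_USERS:
        # Content stays up pending a secondary review that, per the
        # documents, often never happens.
        vip_review_queue.append(post)
        return "left up, routed to VIP review queue"
    if post.violation_score >= 90 and post.reports >= 1:
        return "removed automatically"
    return "no action"

print(enforce(Post("ordinary_user", violation_score=90, reports=1)))  # removed
print(enforce(Post("vip_account", violation_score=90, reports=1)))    # queued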

Mr. Trump’s account was covered by XCheck before his two-year suspension from Facebook in June. So too are those belonging to members of his family, Congress, and the European Union Parliament, along with mayors, civic activists, and dissidents.


Those included in the XCheck program, according to Facebook documents, include, in the top row: Neymar,
Donald Trump, Donald Trump, Jr., and Mark Zuckerberg, and in the bottom row, Elizabeth Warren, Dan Scavino,
Candace Owens, and Doug the Pug.
PHOTO: ZUMA PRESS; GETTY IMAGES (3); REUTERS (2); ASSOCIATED PRESS; PRESS POOL


While the program included most government officials, it didn’t include all candidates for public office, at times effectively granting incumbents in elections an advantage over challengers. The discrepancy was most prevalent in state and local races, the documents show, and employees worried Facebook could be subject to accusations of favoritism.

Mr. Stone acknowledged the concern but said the company had worked to address it. “We made multiple efforts to ensure that both in federal and nonfederal races, challengers as well as incumbents were included in the program,” he said.

The program covers pretty much anyone regularly in the media or who has a substantial online following, including film stars, cable talk-show hosts, academics, and online personalities with large followings. On Instagram, XCheck covers accounts for popular animal influencers including “Doug the Pug.”


Source: August 2020 Facebook internal presentation called "Political Influence on Content Policy"


In practice, most of the content flagged by the XCheck system faced no subsequent review, the documents show.

Even when the company does review the material, enforcement delays like the one on Neymar’s posts mean content that should have been prohibited can spread to large audiences. Last year, XCheck allowed posts that violated its rules to be viewed at least 16.4 billion times, before later being removed, according to a summary of the program in late December.

Facebook recognized years ago that the enforcement exemptions granted by its XCheck system were unacceptable, with protections sometimes granted to what it called abusive accounts and persistent violators of the rules, the documents show. Nevertheless, the program expanded over time, with tens of thousands of accounts added just last year.

In addition, Facebook has asked fact-checking partners to retroactively change their findings on posts from high-profile accounts, waived standard punishments for propagating what it classifies as misinformation, and even altered planned changes to its algorithms to avoid political fallout.

“Facebook currently has no firewall to insulate content-related decisions from external pressures,” a September 2020 memo by a Facebook senior research scientist states, describing daily interventions in its rule-making and enforcement process by both Facebook’s public-policy team and senior executives.

A December memo from another Facebook data scientist was blunter: “Facebook routinely makes exceptions for powerful actors.”

source
part 2 ►
9
Social Media / Facebook Says Its Rules Apply to All 2/2
« Last post by javajolt on September 14, 2021, 04:10:03 PM »
◄ part 1


Flubbed calls

Mr. Zuckerberg has consistently framed his position on how to moderate controversial content as one of principled neutrality. “We do not want to become the arbiters of truth,” he told Congress in a hearing last year.

Facebook’s special enforcement system for VIP users arose from the fact that its human and automated content-enforcement systems regularly flub calls.

Part of the problem is resources. While Facebook has trumpeted its spending on an army of content moderators, it still isn’t capable of fully processing the torrent of user-generated content on its platforms. Even assuming adequate staffing and a higher accuracy rate, making millions of moderation decisions a day would still involve numerous high-profile calls with the potential for bad PR.

Facebook wanted a system for “reducing false positives and human workload,” according to one internal document. The XCheck system was set up to do that.

To minimize conflict with average users, the company has long kept its notifications of content removals opaque. Users often describe on Facebook, Instagram or rival platforms what they say are removal errors, often accompanied by a screenshot of the notice they receive.

Facebook pays close attention. One internal presentation about the issue last year was titled “Users Retaliating Against Facebook Actions.”

“Literally all I said was a happy birthday,” one user posted in response to a botched takedown, according to the presentation.

“Apparently Facebook doesn’t allow complaining about paint colors now?” another user complained after Facebook flagged as hate speech the declaration that “white paint colors are the worst.”

“Users like to screenshot us at our most ridiculous,” the presentation said, noting they often are outraged even when Facebook correctly applies its rules.

If getting panned by everyday users is unpleasant, inadvertently upsetting prominent ones is potentially embarrassing.

Last year, Facebook’s algorithms misinterpreted a years-old post from Hosam El Sokkari, an independent journalist who once headed the BBC’s Arabic News service, according to a September 2020 “incident review” by the company.

In the post, he condemned Osama bin Laden, but Facebook’s algorithms misinterpreted the post as supporting the terrorist, which would have violated the platform’s rules. Human reviewers erroneously concurred with the automated decision and denied Mr. El Sokkari’s appeal.

As a result, Mr. El Sokkari’s account was blocked from broadcasting a live video shortly before a scheduled public appearance. In response, he denounced Facebook on Twitter and the company’s own platform in posts that received hundreds of thousands of views.

Facebook swiftly reversed itself, but shortly afterward mistakenly took down more of Mr. El Sokkari’s posts criticizing conservative Muslim figures.

“Facebook Arabic support team has obviously been infiltrated by extremists,” Mr. El Sokkari tweeted in response, an assertion that prompted more scrambling inside Facebook.

After seeking input from 41 employees, Facebook said in a report about the incident that XCheck remained too often “reactive and demand-driven.” The report concluded that XCheck should be expanded further to include prominent independent journalists such as Mr. El Sokkari, to avoid future public-relations black eyes.

As XCheck mushroomed to encompass what the documents said are millions of users worldwide, reviewing all the questionable content became a fresh mountain of work.


Whitelist status

In response to what the documents describe as chronic underinvestment in moderation efforts, many teams around Facebook chose not to enforce the rules with high-profile accounts at all—the practice referred to as whitelisting. In some instances, whitelist status was granted with little record of who had granted it and why, according to the 2019 audit.

“This problem is pervasive, touching almost every area of the company,” the 2019 review states, citing the audit. It concluded that whitelists “pose numerous legal, compliance, and legitimacy risks for the company and harm to our community.”


Facebook is trying to eliminate the practice of whitelisting, the documents show. Its headquarters in Menlo Park.
PHOTO: IAN BATES FOR THE WALL STREET JOURNAL


A plan to fix the program, described in a document the following year, said that blanket exemptions and posts that were never subsequently reviewed had become the core of the program, meaning most content from XCheck users wasn’t subject to enforcement. “We currently review less than 10% of XChecked content,” the document stated.

Mr. Stone said the company improved that ratio during 2020, though he declined to provide data.

The leeway given to prominent political accounts on misinformation, which the company in 2019 acknowledged in a limited form, baffled some employees responsible for protecting the platforms. High-profile accounts posed greater risks than regular ones, researchers noted, yet were the least policed.

“We are knowingly exposing users to misinformation that we have the processes and resources to mitigate,” said a 2019 memo by Facebook researchers, called “The Political Whitelist Contradicts Facebook’s Core Stated Principles.” Technology website The Information previously reported on the document.

In one instance, political whitelist users were sharing articles from alternative-medicine websites claiming that a Berkeley, Calif., doctor had revealed that chemotherapy doesn’t work 97% of the time. Fact-checking organizations have debunked the claims, noting that the science is misrepresented and that the doctor cited in the article died in 1978.

In an internal comment in response to the memo, Samidh Chakrabarti, an executive who headed Facebook’s Civic Team, which focuses on political and social discourse on the platform, voiced his discomfort with the exemptions.

“One of the fundamental reasons I joined FB Is that I believe in its potential to be a profoundly democratizing force that enables everyone to have an equal civic voice,” he wrote. “So having different rules on speech for different people is very troubling to me.”

Other employees said the practice was at odds with Facebook’s values.

“FB’s decision-making on content policy is influenced by political considerations,” wrote an economist in the company’s data science division.

“Separate content policy from public policy,” recommended Kaushik Iyer, then lead engineer for Facebook’s civic integrity team, in a June 2020 memo.


Source: May 2019 comment from Samidh Chakrabarti, Facebook's then-head of Civic Integrity, on an internal presentation titled "The Political Whitelist Contradicts Facebook's Core Stated Principles."


Buzzfeed previously reported on elements of these documents.

That same month, employees debated on Workplace, the internal platform, about the merits of going public with the XCheck program.

As the transparency proposal drew dozens of “like” and “love” emojis from colleagues, the Civic Team’s Mr. Chakrabarti looped in the product manager overseeing the XCheck program to offer a response.

The fairness concerns were real and XCheck had been mismanaged, the product manager wrote, but “we have to balance that with business risk.” Since the company was already trying to address the program’s failings, the best approach was “internal transparency,” he said.

On May 5, Facebook’s Oversight Board upheld the suspension of Mr. Trump, whom it accused of creating a risk of violence in connection with the Jan. 6 riot at the Capitol in Washington. It also criticized the company’s enforcement practices, recommending that Facebook more clearly articulate its rules for prominent individuals and develop penalties for violators.


In May, Facebook’s Oversight Board upheld the suspension of former President Donald Trump.
PHOTO: ANDREW HARRER/BLOOMBERG NEWS


As one of 19 recommendations, the board asked Facebook to “report on the relative error rates and thematic consistency of determinations made through the cross check process compared with ordinary enforcement procedures.”

A month later, Facebook said it was implementing 15 of the 19 recommendations. The one about disclosing cross check data was one of the four it said it wouldn’t adopt.

“It’s not feasible to track this information,” Facebook wrote in its responses. “We have explained this product in our newsroom,” it added, linking to a 2018 blog post that declared “we remove content from Facebook, no matter who posts it, when it breaks our standards.” Facebook’s 2019 internal review had previously cited that same blog post as misleading.

The XCheck documents show that Facebook misled the Oversight Board, said Kate Klonick, a law professor at St. John’s University. The board was funded with an initial $130 million commitment from Facebook in 2019, and Ms. Klonick was given special access by the company to study the group’s formation and its processes.

“Why would they spend so much time and money setting up the Oversight Board, then lie to it?” she said of Facebook after reviewing XCheck documentation at the Journal’s request. “This is going to completely undercut it.”

In a written statement, a spokesman for the board said it “has expressed on multiple occasions its concern about the lack of transparency in Facebook’s content moderation processes, especially relating to the company’s inconsistent management of high-profile accounts.”

Facebook is trying to eliminate the practice of whitelisting, the documents show and the company spokesman confirmed. The company set a goal of eliminating total immunity for “high severity” violations of FB rules in the first half of 2021. A March update reported that the company was struggling to rein in additions to XCheck.

“VIP lists continue to grow,” a product manager on Facebook’s Mistakes Prevention Team wrote. She announced a plan to “stop the bleeding” by blocking Facebook employees’ ability to enroll new users in XCheck.

One potential solution remains off the table: holding high-profile users to the same standards as everyone else.

“We do not have systems built out to do that extra diligence for all integrity actions that can occur for a VIP,” her memo said. To avoid making mistakes that might anger influential users, she noted, Facebook would instruct reviewers to take a gentle approach.

“We will index to assuming good intent in our review flows and lean into ‘innocent until proven guilty,’ ” she wrote.

The plan, the manager wrote, was “generally” supported by company leadership.


source
10
Apple / Apple patches an NSO zero-day flaw affecting all devices
« Last post by javajolt on September 14, 2021, 11:21:18 AM »
Citizen Lab says the ForcedEntry exploit affects all iPhones, iPads, Macs, and Watches

Apple has released security updates for a zero-day vulnerability that affects every iPhone, iPad, Mac, and Apple Watch. Citizen Lab, which discovered the vulnerability and was credited with the find, urges users to immediately update their devices.

The technology giant said iOS 14.8 for iPhones and iPads, as well as new updates for the Apple Watch and macOS, will fix at least one vulnerability that it said “may have been actively exploited.”

Citizen Lab said it has now discovered new artifacts of the ForcedEntry vulnerability, details it first revealed in August as part of an investigation into the use of a zero-day vulnerability that was used to silently hack into iPhones belonging to at least one Bahraini activist.

Last month, Citizen Lab said the zero-day flaw — named as such since it gives companies zero days to roll out a fix — took advantage of a flaw in Apple’s iMessage, which was exploited to push the Pegasus spyware, developed by Israeli firm NSO Group, to the activist’s phone.

Pegasus gives its government customers near-complete access to a target’s device, including their personal data, photos, messages, and location.

The breach was significant because the flaws exploited the latest iPhone software at the time, both iOS 14.4 and the later iOS 14.6, which Apple released in May. The exploit also broke through new iPhone defenses that Apple had baked into iOS 14, dubbed BlastDoor, which were supposed to prevent silent attacks by filtering potentially malicious code. Citizen Lab calls this particular exploit ForcedEntry for its ability to skirt Apple’s BlastDoor protections.

In its latest findings, Citizen Lab said it found evidence of the ForcedEntry exploit on the iPhone of a Saudi activist, running at the time the latest version of iOS. The researchers said the exploit takes advantage of a weakness in how Apple devices render images on the display.

Citizen Lab now says that the same ForcedEntry exploit works on all Apple devices running what was, until today, the latest software.

Citizen Lab said it reported its findings to Apple on September 7. Apple pushed out the updates for the vulnerability, known officially as CVE-2021-30860. Citizen Lab said it attributes the ForcedEntry exploit to NSO Group with high confidence, citing evidence it has seen that it has not previously published.

John Scott-Railton, a researcher at Citizen Lab, told TechCrunch that messaging apps like iMessage are an increasing target of nation-state hacking operations, and this latest find underlines the challenges in securing them.

In a brief statement, Apple’s head of security engineering and architecture Ivan Krstić confirmed the fix.

“After identifying the vulnerability used by this exploit for iMessage, Apple rapidly developed and deployed a fix in iOS 14.8 to protect our users. We’d like to commend Citizen Lab for successfully completing the very difficult work of obtaining a sample of this exploit so we could develop this fix quickly. Attacks like the ones described are highly sophisticated, cost millions of dollars to develop, often have a short shelf life, and are used to target specific individuals. While that means they are not a threat to the overwhelming majority of our users, we continue to work tirelessly to defend all our customers, and we are constantly adding new protections for their devices and data,” said Krstić.

NSO Group declined to answer our specific questions.

Updated with comment from Apple.

source