AI Can Filter Online Material By Content

This is the future:

The UK government has unveiled a tool it says can accurately detect jihadist content and block it from being viewed.

Home Secretary Amber Rudd told the BBC she would not rule out forcing technology companies to use it by law.

The government provided £600,000 of public funds towards the creation of the tool by an artificial intelligence company based in London.

ASI Data Science said the software can be configured to detect 94% of IS video uploads.

Anything the software identifies as potential IS material would be flagged up for a human decision to be taken.

The company said it typically flagged 0.005% of non-IS video uploads. On a site with five million daily uploads, it would flag 250 non-IS videos for review.

It is intended to lighten the moderation burden faced by small companies that may not have the resources to effectively tackle extremist material being posted on their sites.
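
A quick back-of-the-envelope check of those figures (a sketch only; the 94% detection rate, the 0.005% false positive rate, and the five-million-upload example all come from the article):

```python
# Back-of-the-envelope check of the figures quoted above.
daily_uploads = 5_000_000            # the article's example site
false_positive_rate = 0.005 / 100    # 0.005% of non-IS uploads get flagged
detection_rate = 94 / 100            # claimed share of IS uploads the tool catches

# The article applies the false positive rate to all five million uploads.
expected_false_flags = daily_uploads * false_positive_rate
print(round(expected_false_flags))   # 250, matching the article's figure

# Of any given batch of IS uploads, roughly 6% would still slip through.
missed_fraction = 1 - detection_rate
print(round(missed_fraction, 2))     # 0.06
```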

This will be applied at first to IS recruiting videos, and that is how they will gain the legal power to force it on everyone. But it will quickly shift to being used against right-leaning material opposing immigration, opposing leftist policies, and making fun of SJWs. And once it has the force of law, it will be used prodigiously.

It would never be the death of the right, but it would be an obstacle we would have to overcome. I assume that at some point we will have to switch to a twilight-web model, something more peer-to-peer than the client-server model we presently use, so as to bypass the censors.

As always, the more the SJWs infect any area with their ideology, the more people will flock out of it, and to opposing technologies.

Tell everyone about r/K Theory, because the internet will not always be this free.

9 Comments
LembradorDos6Trilliões
6 years ago

That is why I am asking people to test d.tube with that video about white genocide that is so heavily targeted for censorship, so I can then ask some UK e-fren to check whether they can see it on d.tube.

Video in question:
https://www.onlinevideoconverter.com/success?id=e4f5f5c2d3j9i8i8c2b1

D.tube:
https://d.tube

Brian Bonner
Reply to  LembradorDos6Trilliões
6 years ago

There is also http://www.pewtube.com, which is free and censorship-free.

LembradorDos6Trilliões
Reply to  Brian Bonner
6 years ago

Thanks!

Sentinel
6 years ago

A great thing about machine learning algorithms is that they never work properly when people deliberately try to screw with them. This has been well known since the late ’90s, when antivirus companies experimented with machine learning malware detection and found that, while it worked well in controlled environments, once people actually began trying to evade the filters the true positive rate fell to low double-digit percentages at best and the false positive rate rose above 10%. The same results have been found with other applications of machine learning, like facial recognition and stylometry (determining the author of a piece of text from its writing patterns).

So if this is ever deployed against ungood political ideologies, I have no doubt that it will be possible to screw with the algorithm so that it is unable to accurately identify targeted videos, or has to be tuned so aggressively that it flags a third of all videos posted as “alt-right” videos.
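
As a toy illustration of that tradeoff (a made-up score model, nothing to do with the actual Home Office tool): if uploaders deliberately push their content's classifier scores toward the benign range, the only way to keep catching them is to lower the flagging threshold, and the false positive rate explodes.

```python
# Toy illustration of the threshold tradeoff described above: scores for
# "benign" and "targeted" videos are drawn from made-up distributions, and
# evasion is modeled as the targeted scores drifting toward the benign ones.
import random

random.seed(0)

def scores(mean, spread, n=100_000):
    return [random.gauss(mean, spread) for _ in range(n)]

benign = scores(0.2, 0.15)
targeted_naive = scores(0.8, 0.15)     # uploader makes no effort to evade
targeted_evasive = scores(0.45, 0.15)  # uploader deliberately games the classifier

def rates(threshold, positives, negatives):
    tpr = sum(s > threshold for s in positives) / len(positives)
    fpr = sum(s > threshold for s in negatives) / len(negatives)
    return tpr, fpr

for threshold in (0.6, 0.5, 0.4, 0.3):
    tpr_naive, fpr = rates(threshold, targeted_naive, benign)
    tpr_evasive, _ = rates(threshold, targeted_evasive, benign)
    print(f"threshold {threshold:.1f}: catches {tpr_naive:.0%} naive, "
          f"{tpr_evasive:.0%} evasive, flags {fpr:.1%} of benign uploads")
```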

The solution to all of this is a P2P distributed internet like Freenet. But HTTP is far more convenient, so nobody uses Freenet except pedos and people with a technical interest in the network.

LembradorDos6Trilliões
Reply to  Sentinel
6 years ago

Thanks for the info. What do you think about d.tube? Is it really impossible to censor the content posted there? I wish I was less of a retard in order to help better, but that is life.

Sentinel
Reply to  LembradorDos6Trilliões
6 years ago

>what do you think about d.tube
I hadn’t heard of it until now, but if my five minutes of looking up info is right, it’s essentially youtube where videos are stored on the Steem blockchain, and you search for videos by reading the blockchain from a javascript application running in your browser. In that case, it should be fully decentralized.

I haven’t looked through the javascript though, so I can’t say anything about the safety of the site. If the javascript used for dtube sends any information to an external site (there’s a lot of profiling information that can be gathered through javascript), that would be very bad. If it allows third party advertisement, it would also be bad, because that would mean allowing untrusted third parties to run javascript in your browser.

I don’t know if dtube does either of these things, but I’ll have to look through the javascript first to be sure.

If dtube doesn’t do those two things, the use of javascript (instead of a plugin like silverlight or native code compiled through NaCl) is good, as it limits the possibility of vulnerabilities existing in the site, especially if all the site does is access the steem blockchain.

>is it impossible to censor the content
If my understanding of dtube is correct, the short answer is: not in principle, but in practice, yes. The long answer is “censoring it would require either breaking SHA-256 or the FBI/NSA/deep state controlling 51% of the Steem network.”

Blockchains are public, decentralized, and immutable*. There is no central source that can be attacked or modified; the blockchain is stored on every computer that is mining Steem and can be viewed or downloaded by anyone who wants to do so. It is also self-regulating: every node verifies that each new block is correct before propagating it through the network. If the NSA were to tamper with a block, the tampering would be obvious, and the other nodes would refuse to propagate it, propagating the correct block instead.

This all changes if the NSA controls more than half the network. Because blocks are verified by what is essentially a community vote, an NSA that owns the majority can pass off its tampered blocks as legitimate and propagate them, while rejecting blocks it dislikes as illegitimate even when they are correct. This is usually called a 51% attack; flooding the network with fake identities to obtain that majority is known as a Sybil attack.
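
As a rough sketch of the tamper check described above (a generic toy hash chain, not Steem's actual block format or consensus rules):

```python
# Minimal toy hash chain illustrating why tampering is immediately visible
# to honest nodes. This is a generic sketch, not Steem's real data structures.
import hashlib
import json

def block_hash(contents):
    # Hash the block's contents together with the previous block's hash.
    payload = json.dumps(contents, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(prev_hash, data):
    block = {"prev": prev_hash, "data": data}
    block["hash"] = block_hash({"prev": prev_hash, "data": data})
    return block

def chain_is_valid(chain):
    # Every honest node can re-run this check before propagating new blocks.
    for i, block in enumerate(chain):
        if block["hash"] != block_hash({"prev": block["prev"], "data": block["data"]}):
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block("0" * 64, "genesis")
b1 = make_block(genesis["hash"], "video metadata A")
b2 = make_block(b1["hash"], "video metadata B")
chain = [genesis, b1, b2]

print(chain_is_valid(chain))   # True

# An attacker who rewrites history without majority control of the network
# (and without redoing every later block) is caught immediately.
chain[1]["data"] = "censored"
print(chain_is_valid(chain))   # False
```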

*A blockchain is immutable so long as the hash algorithm it relies on, SHA-256 in the case of Steem, is not vulnerable to a preimage attack (an attack where, given a hash output, an attacker can find an input that produces that output). No preimage attacks against SHA-256 are known. In fact, practical preimage attacks are very rare; none have been found even for obsolete algorithms such as SHA-1 and MD5. Collision attacks are far more common, but they do not affect the security of a blockchain.

All of this should remain true even in the face of quantum computers. SHA-256 offers 256 bits of preimage resistance against conventional computers, which works out to roughly 128 bits of security against a quantum computer running Grover’s algorithm, still far more than enough to resist any brute force attack even the NSA could ever mount.
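
To make the numbers concrete (a toy demonstration only: it brute-forces a preimage for just the first 16 bits of a SHA-256 digest, since each extra bit doubles the expected work, and extrapolating to the full 256 bits, or roughly 128 bits of effective work under Grover, is what puts the search out of reach):

```python
# Toy illustration of why preimage attacks on SHA-256 are out of reach.
# We only try to match the first 16 bits of a digest; every additional bit
# doubles the expected number of attempts.
import hashlib
from itertools import count

def leading_bits(data, n_bits):
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") >> (len(digest) * 8 - n_bits)

target = leading_bits(b"some block contents", 16)

for attempts in count(1):
    candidate = str(attempts).encode()
    if leading_bits(candidate, 16) == target:
        print(f"partial preimage found after {attempts:,} attempts")
        break

# Expected work scales as 2**n_bits classically and ~2**(n_bits/2) with Grover:
for n_bits in (16, 64, 128, 256):
    print(f"{n_bits:>3} bits: ~2**{n_bits} classical tries, ~2**{n_bits // 2} with Grover")
```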

LembradorDos6Trilliões
6 years ago

https://youtu.be/GWcVz4S5gqE

Good video on research into MIC and deep state, by https://mobile.twitter.com/fedupwithswamp?lang=en

everlastingphelps
6 years ago

In all fairness, you could just have it detect Arabic and come up with the same results.
