Internet

Authors Guild Petitions Supreme Court to Rule on Google Copying Millions of Books Without Permission

Today, the Authors Guild, the nation’s largest and oldest society of professional writers, filed a petition with the Supreme Court of the United States requesting that it review a lower court ruling that allowed Google, Inc. to copy millions of copyright-protected books without asking for authors’ permission or paying them. At stake, the Guild claims, is the right of authors to determine what becomes of their works in the digital age.

From Authors Guild Petitions Supreme Court to Rule on Google Copying Millions of Books Without Permission - The Authors Guild

How ‘Do Not Track’ Ended Up Going Nowhere

The Electronic Frontier Foundation, which was also a member of the task force, took matters into its own hands. In August it released the final version of Privacy Badger, a free plugin for the Firefox and Chrome browsers. Whenever a user turns on Do Not Track in the browser settings, Privacy Badger acts as an enforcer — it scans any website to determine whether the publisher has agreed to honor the privacy request. If it can’t find a policy, it scans for third-party scripts that appear to be tracking — and blocks them.

“At the core of our project is the protection of users’ reading habits and browsing history,” the EFF wrote in introducing Privacy Badger. “And a conviction that this is personal information that should not be accessed without consent.”
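For readers curious what that check looks like mechanically, here is a minimal sketch (not Privacy Badger's actual code): the browser signals the preference with a `DNT: 1` request header, and an enforcer can probe a third-party domain for a machine-readable policy at the well-known path the EFF defined for its DNT policy before deciding whether to block that domain's scripts. The domain name used below is hypothetical.

```python
# Minimal sketch of a Privacy Badger-style policy check, not the extension's
# real code. A site commits to the EFF's Do Not Track policy by publishing it
# at the well-known path below; if no policy is found, an enforcer would treat
# the domain's tracking scripts as candidates for blocking.
import urllib.error
import urllib.request

DNT_POLICY_PATH = "/.well-known/dnt-policy.txt"


def site_honors_dnt(domain: str, timeout: float = 5.0) -> bool:
    """Return True if the domain publishes a DNT policy at the well-known path."""
    url = f"https://{domain}{DNT_POLICY_PATH}"
    # The DNT: 1 header is how the browser expresses the user's preference.
    request = urllib.request.Request(url, headers={"DNT": "1"})
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status == 200
    except urllib.error.URLError:
        return False


if __name__ == "__main__":
    # Hypothetical third-party domain, for illustration only.
    domain = "tracker.example.com"
    verdict = "allow" if site_honors_dnt(domain) else "block"
    print(f"A Privacy Badger-style enforcer would {verdict} scripts from {domain}")
```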

From How ‘Do Not Track’ Ended Up Going Nowhere | Re/code

It’s 2016 already, how are websites still screwing up these user experiences?!

We’re a few days into the new year and I’m sick of it already. This is fundamental web usability 101 stuff that plagues us all and makes our online life that much more painful than it needs to be. None of these practices – none of them – is ever met with “Oh how nice, this site is doing that thing”. Every one of these is absolutely driving the web into a dismal abyss of frustration and much ranting by all.

And before anyone retorts with “Oh you can just install this do-whacky plugin which rewrites the page or changes the behaviour”, no, that’s entirely not the point. Not only does it not solve a bunch of the problems, it shouldn’t damn well have to! How about we all just agree to stop making the web a less enjoyable place and not do these things from the outset?

From Troy Hunt: It’s 2016 already, how are websites still screwing up these user experiences?!

w3id.org - Permanent Identifiers for the Web

The purpose of this website is to provide a secure, permanent URL re-direction service for Web applications. This service is run by the W3C Permanent Identifier Community Group.

Web applications that deal with Linked Data often need to specify and use URLs that are very stable. They utilize services such as this one to ensure that applications using their URLs will always be re-directed to a working website. This website operates like a switchboard, connecting requests for information with the true location of the information on the Web. The switchboard can be reconfigured to point to a new location if the old location stops working.
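From the client's point of view, the switchboard behaviour is just an ordinary HTTP redirect: the stable w3id.org URL answers with a 301/302 pointing at wherever the resource currently lives, and the client follows it. The sketch below illustrates that (it is not the service's own code), using a hypothetical identifier.

```python
# Minimal sketch of resolving a permanent identifier: the w3id.org
# "switchboard" answers with an HTTP redirect, and the client follows it
# to the resource's current location.
import urllib.request


def resolve_permanent_id(url: str) -> str:
    """Follow redirects from a permanent identifier to its current location."""
    with urllib.request.urlopen(url, timeout=10) as response:
        # urlopen follows redirects automatically; geturl() is the final address.
        return response.geturl()


if __name__ == "__main__":
    permanent_id = "https://w3id.org/example/vocab"  # hypothetical identifier
    print(resolve_permanent_id(permanent_id))
```

Because only the switchboard needs to change when content moves, applications can hard-code the w3id.org URL and still find the resource years later.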

There is a growing group of organizations that have pledged responsibility to ensure the operation of this website. These organizations are: Digital Bazaar, 3 Round Stones, OpenLink Software, Applied Testing and Technology, Openspring, and Bosatsu Consulting. They are responsible for all administrative tasks associated with operating the service. The social contract between these organizations gives each of them full access to all information required to maintain and operate the website. The agreement is set up such that a number of these companies could fail, lose interest, or become unavailable for long periods of time without negatively affecting the operation of the site.

From w3id.org - Permanent Identifiers for the Web

How Wikipedia's reaction to popularity is causing its decline

Open collaboration systems like Wikipedia need to maintain a pool of volunteer contributors in order to remain relevant. Wikipedia was created through a tremendous number of contributions by millions of contributors. However, recent research has shown that the number of active contributors in Wikipedia has been declining steadily for years, and suggests that a sharp decline in the retention of newcomers is the cause. This paper presents data that show that several changes the Wikipedia community made to manage quality and consistency in the face of a massive growth in participation have ironically crippled the very growth they were designed to manage. Specifically, the restrictiveness of the encyclopedia’s primary quality control mechanism and the algorithmic tools used to reject contributions are implicated as key causes of decreased newcomer retention. Further, the community’s formal mechanisms for norm articulation are shown to have calcified against changes – especially changes proposed by newer editors.

From [PDF] How Wikipedia’s reaction to popularity is causing its decline

How the Internet changed the way we read

One silver lining is that the technological democratization of social media has effectively deconstructed the one-sided power of the Big Bad Media in general and influential writing in particular, which in theory makes this era freer and more decentralized than ever. One downside to technological democratization is that it hasn’t led to a thriving marketplace of ideas, but a greater retreat into the Platonic cave of self-identification with the shadow world. We have never needed a safer and quieter place to collect our thoughts from the collective din of couch quarterbacking than we do now, which is why it’s so easy to preemptively categorize the articles we read before we actually read them to save ourselves the heartache and the controversy.

From How the Internet changed the way we read

Wikipedia fails as an encyclopedia, to science’s detriment

Most entries, but not all. Disturbingly, all of the worst entries I have ever read have been in the sciences. Wander off the big ideas in the sciences, and you're likely to run into entries that are excessively technical and provide almost no context, making them effectively incomprehensible.

This failure is a minor problem for Wikipedia, as most of the entries people rely on are fine. But I'd argue that it's a significant problem for science. The problematic entries reinforce the popular impression that science is impossible to understand and isn't for most people—they make science seem elitist. And that's an impression that we as a society really can't afford.

From Editorial: Wikipedia fails as an encyclopedia, to science’s detriment | Ars Technica

Zuckerberg compares free internet services to public libraries and hospitals

Facebook founder Mark Zuckerberg has written a forceful defense of the company's plans to offer limited, free internet access in India, comparing Facebook's Free Basics service with libraries and public hospitals. In an op-ed written for The Times of India, Zuckerberg says that although libraries don't offer every book and hospitals can't cure every illness, they still provide a "world of good," suggesting that even though free internet services like Free Basics offer access to only a limited number of sites — which third parties can apply to join but which Facebook ultimately controls — they are still an essential public service.

From Zuckerberg compares free internet services to public libraries and hospitals | The Verge

Meme Librarian for Tumblr

Amanda Brennan is a librarian for the Internet. Her career in meme librarianism began in graduate school at Rutgers, where she received a master’s in library science.

But instead of heading to a brick-and-mortar library, Brennan continued documenting online phenomena at Know Your Meme and then at Tumblr, where she solidified her role as the information desk for doge, mmm whatcha say and other viral Internet sensations in need of classification, categorization and preservation.

Here's the meme-ish story from the Washington Post.

The birth of the web | CERN

Tim Berners-Lee, a British scientist at CERN, invented the World Wide Web (WWW) in 1989. The web was originally conceived and developed to meet the demand for automatic information-sharing between scientists in universities and institutes around the world.

The first website at CERN - and in the world - was dedicated to the World Wide Web project itself and was hosted on Berners-Lee's NeXT computer. The website described the basic features of the web: how to access other people's documents and how to set up your own server. The NeXT machine - the original web server - is still at CERN. In 2013, as part of a project to restore the first website, CERN reinstated it at its original address.

On 30 April 1993 CERN put the World Wide Web software in the public domain. CERN made the next release available with an open licence, as a more certain way to maximise its dissemination. Through these actions - making the software required to run a web server freely available, along with a basic browser and a library of code - CERN allowed the web to flourish.

From The birth of the web | CERN
