Melissa Selby

Items from 9 people Melissa Selby follows

Todd Suomela
  • Over the past few years, we’ve learned a lot about how companies watch their workers, measuring their performance, their attitudes, whether they’re looking for other jobs. Now new research is putting the watchers under the microscope, too.

    Specifically, how good they are at hiring people. Human resources professionals aren’t immune from the pressures of automation — sophisticated testing has increasingly pushed old-fashioned intuition aside when it comes to recruiting. And a study released this month shows that traditional hiring managers can really screw things up.

    “It definitely suggests that more decision making powers should be given to the machine relative to the humans,” says University of Toronto professor Mitchell Hoffman, one of the report's authors.

Todd Suomela
  • It turns out that even the millennials who fight wars don't want to hear bigoted jokes.
  • The complaint that kids these days are too soft is an eternal one. A letter to Town and Country magazine in 1771, noted by Mental Floss, complains, “a race of effeminate, self-admiring, emaciated fribbles can never have descended in a direct line from the heroes of Potiers and Agincourt ...” So forget false nostalgia for the days when the youth were stronger and tougher. For institutions built on a foundation of a constant churn of teenagers, managing their relationships is just a matter of practicality.
Todd Suomela
  • First, the reason the Internet works as well as it does is that it is a best-efforts network. There are no guarantees at the network level, and thus there are no guarantees further up the stack. You get what you get and you don't get upset. The Web is not reliable, so even if you perfectly understood the collection policy that caused the crawler to request a page, you do not know the many possible reasons the page may not be in the archive. The network may have failed or been congested, the Web server may have been down, or overloaded, or buggy, or now have a robots.txt exclusion. The crawler may have been buggy. I could go on.
     You can't deduce anything from the absence of a page in the archive. So you cannot treat the samples of the Web in a Web archive as though they were the result of a consistently enforced collection policy. This should not be a surprise. Paper archives are the result of unreliable human-operated collection policies both before and during accessioning, so you cannot deduce anything from the absence of a document. You don't know whether it never existed, or existed but didn't make it across the archive's transom, or was discarded during the acquisition process.
  • Second, the Web is a moving target. Web archives are in a constant struggle to keep up with its evolution, in order that they maintain the ability to collect and preserve anything at all. I've been harping on this since at least 2012. Given the way the Web is evolving, the important question is not how well researchers can use archives to study the evolution of the Web, but whether the evolution of the Web will make it impossible to archive in the first place. And one of the biggest undocumented biases in Web archives is the result of Web archiving technology lagging behind the evolution of the Web. It is the absence of sites that use bleeding-edge technology in ways that defeat the crawlers. You get what you can get and you don't get upset.
  • Third, the fact is that metadata costs money. It costs money to generate, it costs money to document, it costs money to store. Web archives, and the Internet Archive in particular, are not adequately funded for the immense scale of their task, as I pointed out in The Half-Empty Archive. So better metadata means less data. It is all very well for researchers to lay down the law about the kind of metadata that is "absolutely imperative", "a necessity" or "more and more imperative" but unless they are prepared to foot the bill for generating, documenting and storing this metadata, they get what they get and they don't get upset.
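    The first point above can be made concrete: a robots.txt exclusion is just one of the many failure modes that keep a page out of an archive. The sketch below uses Python's standard-library urllib.robotparser to show how a polite crawler decides to skip a page; the robots.txt contents, crawler name, and URLs are all hypothetical.

    ```python
    from urllib.robotparser import RobotFileParser

    # Hypothetical robots.txt: the site now excludes its /archive/ section.
    ROBOTS_TXT = """User-agent: *
    Disallow: /archive/""".splitlines()

    rp = RobotFileParser()
    rp.parse(ROBOTS_TXT)

    # A polite crawler must skip the excluded page, so it never enters the
    # archive -- one of many possible causes (network failures, overloaded
    # or buggy servers, buggy crawlers) behind a page's absence.
    print(rp.can_fetch("MyCrawler", "https://example.com/archive/2012/post.html"))  # False
    print(rp.can_fetch("MyCrawler", "https://example.com/index.html"))              # True
    ```

    Since the exclusion may have been added long after the page was published, observing it today tells you nothing about whether the page ever existed or was ever crawlable — which is exactly why absence from an archive supports no inference.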
Todd Suomela
  • News apps aren’t being preserved because they are software, and software preservation is a specialized, idiosyncratic pursuit that requires more money and more specialized labor than is available at media organizations today. But, you might argue, it ought to be easy to preserve stories that are not software, right? A story like LaFrance’s, which is composed of text and images and a few hyperlinks to outside sources, ought to be simpler to save?

    You’d think so. But not necessarily.
