

Paul Gillin's Public Library

  • Studies reveal that business buyers spend 21% of the buying cycle conversing with salespeople and 23% conversing with colleagues and peers. Surprisingly, 56% of their buying-cycle time is spent searching for and engaging with content.
  • 90% of marketers are not trained in marketing performance and marketing ROI, and 80% struggle to demonstrate the business effectiveness of their spending, marketing campaigns, and activities to top management.

  • One of the biggest mistakes that IT organizations make is looking at “the cloud” as a destination instead of an operational model.
  • Whether the application is Customer- and Community-focused, or Internally focused.
  • Create a new business opportunity or optimize an existing process.


  • Wikibon believes Spark will be a crucial catalyst to driving the inflection points for each of Wikibon’s three big data application patterns. In 2016, Spark-based investments will capture 6% of total big data spending, growing to 37% by 2022 because:
  • Switching metaphors, if an entire program is like a spreadsheet, each cell has to write its results to disk in order to pass them to other cells (see the sketch after this list).
  • Simplicity through unification is progressively replacing the mix-and-match flexibility and complexity of specialized engines that grew up in Hadoop to compensate for MapReduce’s shortcomings.
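A minimal PySpark sketch of the contrast described above, assuming hypothetical HDFS paths and field names: Spark pipelines transformations in memory and only materializes results when an action runs, instead of writing each stage's output to disk the way MapReduce-era engines did.

```python
# Minimal PySpark sketch (hypothetical paths and columns) illustrating how Spark
# keeps intermediate results in memory across stages, unlike MapReduce, which
# writes each stage's output to disk before the next stage can read it.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-vs-mapreduce-sketch").getOrCreate()

# Stage 1: read raw events (the path is an assumption for the example).
events = spark.read.json("hdfs:///data/raw/events")

# Stage 2: filter and aggregate; no intermediate files are materialized, since
# Spark pipelines these transformations and computes them only on an action.
purchases = events.filter(events.event_type == "purchase")
daily_totals = purchases.groupBy("day").sum("amount")

# cache() keeps the result in memory so several downstream "cells" can reuse
# it without recomputing or re-reading from disk.
daily_totals.cache()

daily_totals.show()          # first action reuses the cached data
print(daily_totals.count())  # second action reuses the cached data

spark.stop()
```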


  • Waxman said that by 2025, Intel expects that 70 percent to 80 percent of all servers shipped will be deployed in large-scale datacenters.
  • The point is that it will not be long before the next several hundred cloud service providers provide as much revenue as the hyperscalers at the top of the food chain. Like maybe in two years, perhaps three. And it will not be long until the next 10,000 drive even more sales, and so on until everyone who wants to build a cloud does and those that don’t run their applications on someone else’s cloud.

  • A data lake is a set of unstructured information that you assemble for analysis.
  • Data lakes, most commonly built with the Apache Hadoop open-source file system, aim to make that process simple and affordable, so your business can unlock and exploit previously untapped information.
  • There are four parts of a data lake: unstructured data sources, storage where the information resides, the file system, and people/tools to analyze it. You'll need all four parts to turn your lake into cleanly bottled water.
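A minimal sketch of those four parts under an assumed Hadoop/PySpark stack (paths and field names are hypothetical): raw files sit in HDFS as the storage and file system, and an analyst's Spark job stands in for the people/tools layer, imposing structure at read time.

```python
# Sketch of the four data-lake parts described above, assuming a Hadoop +
# PySpark stack; all paths and field names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("data-lake-sketch").getOrCreate()

# 1) Unstructured data sources: raw logs and documents dropped into the lake.
# 2) Storage and 3) file system: here, directories on HDFS.
raw_logs = spark.read.text("hdfs:///lake/raw/web_logs/")

# 4) People/tools: analysts impose structure at read time ("schema on read"),
# e.g. pulling out lines that mention an error and counting them.
errors = raw_logs.filter(raw_logs.value.contains("ERROR"))
print("error lines:", errors.count())

spark.stop()
```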


  • It's increasingly apparent that for many, it's no longer an issue of SQL vs. NoSQL. Instead, it's SQL and NoSQL, with both having their own clear places, and increasingly being integrated into each other. Microsoft, Oracle, and Teradata, for example, are now all selling some form of Hadoop integration to connect SQL-based analysis to the world of unstructured big data.
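An illustrative sketch of that "SQL and NoSQL" coexistence, assuming a PySpark environment rather than the specific Microsoft, Oracle, or Teradata connectors mentioned above; paths, table names, and fields are hypothetical.

```python
# Illustrative sketch of SQL-style analysis over semi-structured data living
# alongside a structured table; this assumes PySpark and is not any vendor's
# specific Hadoop connector. Paths, table names, and fields are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-and-nosql-sketch").getOrCreate()

# Semi-structured side: JSON documents with no schema defined up front.
clicks = spark.read.json("hdfs:///lake/raw/clickstream/")
clicks.createOrReplaceTempView("clicks")

# Structured side: a relational-style customer table stored as Parquet.
customers = spark.read.parquet("hdfs:///warehouse/customers/")
customers.createOrReplaceTempView("customers")

# Plain SQL joins the two worlds in one query.
result = spark.sql("""
    SELECT c.region, COUNT(*) AS click_count
    FROM clicks k
    JOIN customers c ON k.customer_id = c.customer_id
    GROUP BY c.region
    ORDER BY click_count DESC
""")
result.show()

spark.stop()
```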

