
Gary Edwards's List: Documents

  • Feb 07, 09

    This is a must-see discussion!!!! Especially if you've seen the Ted Nelson series of talks at Google. (Ted Nelson invented hypertext, and continues to promote the Xanadu vision of highly graphical and interactive computing based on an advanced "digital" document model.) Jen-Hsun fully embraces the sugarplum document model, dissing in a gentle way the legacy of x86 text-number processing designed to replace typewriters and calculators to produce the same printed document.

    Nvidia has also announced an Ion-based board optimized for the Google Android mobile-telecommunications OS!

  • Feb 11, 09

    Nvidia chief executive Jen-Hsun Huang is on a mission to get graphics chips into everything from handheld computers to smart phones. He expects, for instance, that low-cost netbooks will become the norm and that gadgets will need battery life lasting for days. Holding up an Ion platform, which couples a low-cost Intel Atom processor with an Nvidia integrated graphics chip set, he said his company is looking to determine "what is the soul of the new PC." With Ion, Huang said he is prepared for the future of the computer industry. But first, he has to deal with Intel. Good interview; see it on Charlie Rose!

    The Dance of the Sugarplum Documents is about the evolution of the Web document model from a text-typographical/calculation model to one that is visually rich, meshing graphical media streams with traditional text and calculation. The thing is, this visual document model is being defined on the edge. The challenge to the traditional desktop document model is coming from the edge, primarily from the WebKit - Chrome - iPhone community.

    Jen-Hsun argues on Charlie Rose that desktop computers featured processing power and applications designed to automate typewriter (word processing) and calculator (spreadsheet) functions. The x86 CPU design reflects this orientation. He argues that we are now entering the age of visual computing. A GPU is capable of dramatic increases in processing power because its architecture is geared to the volumes of graphical information being processed. Let the CPU do the traditional stuff, and let the GPU race into the future with the visual processing. That a GPU architecture can scale in parallel is an enormous advantage. But Jen-Hsun does not see the need to try to replicate CPU tasks in a GPU. The best way forward, in his opinion, is to combine the two!!!
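    A toy illustration of that data-parallel point (my own sketch in Python/NumPy, not anything from Nvidia): the same per-pixel operation can be written as a serial loop, the CPU-era mindset, or as one wide array operation, which is the shape of work a GPU spreads across thousands of cores.

        import numpy as np

        # One RGB video frame of random data, standing in for real pixels.
        frame = np.random.rand(1080, 1920, 3)

        # Serial mindset: visit each pixel in turn (slow in pure Python).
        out_serial = np.empty_like(frame)
        for y in range(frame.shape[0]):
            for x in range(frame.shape[1]):
                out_serial[y, x] = np.clip(frame[y, x] * 1.2, 0.0, 1.0)

        # Data-parallel mindset: one operation over all ~2 million pixels
        # at once. This is the formulation GPUs (and SIMD hardware) exploit.
        out_parallel = np.clip(frame * 1.2, 0.0, 1.0)

        assert np.allclose(out_serial, out_parallel)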

  • Aug 30, 11

    Presentation schedule for the OpenDocument PlugFest in Berlin, Germany.

  • Nov 01, 13

    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was the upcoming conversion of the NoteCase Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format.

    My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT transformation to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT routines for conversion to ODF, OOXML, XHTML, ePub, and HTML/CSS.
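    A minimal sketch of that two-hop pipeline in Python with lxml. The filenames and stylesheets (notes.ncp.xml, ncp2tei.xsl, tei2xhtml.xsl) are hypothetical stand-ins for artifacts that would have to be written, or borrowed from the TEI community:

        from lxml import etree

        # Hypothetical inputs: an XML encoding of the legacy NCP format plus
        # two XSLT stylesheets that would have to exist for this to run.
        ncp_doc  = etree.parse("notes.ncp.xml")
        to_tei   = etree.XSLT(etree.parse("ncp2tei.xsl"))    # hop 1: NCP -> TEI-XML pivot
        to_xhtml = etree.XSLT(etree.parse("tei2xhtml.xsl"))  # hop 2: pivot -> one target

        tei_pivot = to_tei(ncp_doc)      # the "universal pivot" document
        xhtml_out = to_xhtml(tei_pivot)  # ODF, OOXML, or ePub would each be one more stylesheet

        with open("notes.xhtml", "wb") as f:
            f.write(etree.tostring(xhtml_out, pretty_print=True))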

    Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff.

    My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML (InDesign Markup Language). The important point, though, is that XHTML is an XML serialization of HTML, and compatible with the WebKit layout engine Miro wants to move NCP to.

    The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is likewise an SGML application.)

    The multiplatform StarOffice productivity suite became "OpenOffice" after Sun purchased StarDivision in 1999 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. That application-specific encoding became an OASIS document format standard proposal in 2002 - the work that became known as ODF.

    Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML.

    Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category without breaking backwards compatibility. The trick is in the XSLT conversion process (a toy example follows below). But I think that is much easier to handle than trying to launch a format converter while simultaneously trying to force a massive end-user upgrade. OpenOffice did very well with their implementation of this process.
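    To make that concrete, here is a toy XSLT mapping run from Python/lxml. The <notecase>/<node> structure is invented for illustration; the real NCP schema would differ:

        from lxml import etree

        # Invented NCP-style outline fragment; the real schema is an assumption.
        ncp = etree.fromstring("""
        <notecase>
          <node><title>Chapter 1</title><text>Opening notes.</text>
            <node><title>Section 1.1</title><text>Details.</text></node>
          </node>
        </notecase>""")

        # Each outline node becomes a nested XHTML <div> with a heading.
        to_xhtml = etree.XSLT(etree.fromstring("""
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns="http://www.w3.org/1999/xhtml">
          <xsl:template match="/notecase">
            <html><body><xsl:apply-templates select="node"/></body></html>
          </xsl:template>
          <xsl:template match="node">
            <div class="outline-node">
              <h2><xsl:value-of select="title"/></h2>
              <p><xsl:value-of select="text"/></p>
              <xsl:apply-templates select="node"/>
            </div>
          </xsl:template>
        </xsl:stylesheet>"""))

        print(etree.tostring(to_xhtml(ncp), pretty_print=True).decode())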

    ...........................
    As an afterthought, I was thinking that an alternative title for this article might have been "Working with the Web as the Center of Everything". The authors validate everything the Open Document Foundation was arguing in 2005-2007. The Web is the future. We need Web-ready formats, and editors capable of producing them.

    It would be nice to have a first-class outliner in that category.

    • Challenges: Some Ugly Truths


      The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition.


      Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper.


      A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition.


      And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5]


      But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.

    • Practical Challenges


      In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks.


      The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly.


      Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.

    45 more annotations...

  • Jul 30, 14

    Stats on cloud usage show that only 29% of Internet users are using a cloud service. One of the charts provided shows that iCloud (Apple) and Dropbox have over 300 million users. Microsoft OneDrive has come out of nowhere to claim third position with 250 million, and Google Drive finishes in fourth place with 200 million.

    Funny that Google would come up so short when Gmail and Chrome have proven to be so successful, and Google Docs was a pioneer of cloud-based editing of productivity documents. Office 365 has only been available on iOS since May, yet look at the numbers! Incredible.

    Oh, and Box is also listed, in fifth place with 25 million users.

    I'm starting to think that Dropbox, Rackspace, and Egnyte are in big trouble. Microsoft is on a huge roll, and my gut instinct is that they have some kind of deal going with Apple around iCloud and Office 365. Amazon is surprisingly missing.
