Sunday, November 15, 2009

When all you have is the reaction...

I'm one of those people who has about 9 different email accounts at any given time, one of which is Gmail. Yesterday I discovered that about 50 legitimate emails had been diverted into my spam folder this week, for no apparent reason. I'm guessing some kind of spam-bomb went off and pushed their filters way up the scale for a day or two. Up until this point it's been one of the most reliable and efficient spam filters I've used.

As a result I've been thinking about Google a lot the past few days. There's the new Verizon Motorola Android phone on the market. There's the Google Books settlement that just came out:

http://news.cnet.com/8301-1023_3-10397787-93.html?part=rss&subj=news&tag=2547-1_3-0-20

And an interview with their CEO talking about Google's successes and challenges:

http://news.cnet.com/8301-30966_3-10396865-262.html?tag=rtcol;inTheNewsNow

I think the most interesting aspect of this interview is the problem of scale that they are up against. I've been noticing recently that the quality of searches has started to suffer, as they struggle (or fail) to keep up with new media and the blogosphere.

Once again the scale and structure of the internet is changing, and they may have to revisit a number of the assumptions they built their search engine upon. Which leads me to the question of "how do you plan for fluidity?" How do you plan for a system that is dynamically alive and changing at a pace that only seems to accelerate?

I don't know if anyone has a really good answer for these questions, but I do think the world of computers has some solutions in the works, namely, what I like to call abstraction.

When I was learning to write a simple web application in PHP last spring, the first thing I did was write a bunch of 'classes' that would define objects. Those classes called the database and fetched the data, handling it in the terms the classes defined. All of the PHP I wrote after that called on those classes, working with the 'objects' rather than calling the database directly.

The beauty and elegance of this is that (in theory) you can change the structure of the database radically, and you only need to change the affected classes. The rest of the code can run, virtually unchanged, on those modified classes. I like to think of it as a type of data abstraction, where the code reaches the database through an abstracted, mediated channel rather than calling the database itself.
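
To make that concrete, here's a minimal sketch along those lines (the class, table, and column names are invented for illustration, not my actual code):

    <?php
    // A minimal sketch of the pattern (names invented for illustration):
    // the rest of the application talks only to this class, never to
    // the database directly.
    class Book
    {
        private $pdo;

        public function __construct(PDO $pdo)
        {
            $this->pdo = $pdo;
        }

        // If the table or column names change, only this method changes;
        // every caller of findByAuthor() keeps working as before.
        public function findByAuthor($author)
        {
            $stmt = $this->pdo->prepare(
                'SELECT title, year FROM books WHERE author = :author'
            );
            $stmt->execute(array(':author' => $author));
            return $stmt->fetchAll(PDO::FETCH_ASSOC);
        }
    }

    // Application code calls the object, not the database:
    $books = new Book(new PDO('sqlite:library.db'));
    foreach ($books->findByAuthor('Borges') as $row) {
        echo $row['title'], "\n";
    }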

This is, however, not unique to PHP and web applications. As I understand it, all object-oriented programming languages function similarly, defining and calling objects, whether the definition is called a class or a library.

This is, again, not unique to object-oriented programming. All of the Linux machines I've used rely on what's called a 'hardware abstraction layer'. Basically, the operating system calls this abstraction layer, and the abstraction layer communicates with the hardware. One of the big problems with talking to the hardware directly is that if it fails to respond, the system hangs, freezes or crashes. So rather than writing a specific response to every type of possible failure, the operating system relies on the hardware abstraction layer (HAL), which reports back if the hardware fails to respond or perform normally. Furthermore, if the hardware changes, the operating system doesn't have to change the way it calls the hardware. In my opinion this is one of the main reasons that Linux has made it out of the geek pit and into the playing field. Prior to this, Linux was synonymous with 'hardware configuration nightmares'.
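
The same shape can be sketched in any language; here's a toy version in PHP (every name here is invented) just to show the pattern of calling through an abstraction that reports failure instead of hanging:

    <?php
    // A toy sketch of the abstraction-layer pattern (names invented):
    // callers talk to the Device interface, never to a specific piece of
    // hardware, and failures come back as exceptions rather than hangs.
    interface Device
    {
        public function read(); // returns data or throws DeviceException
    }

    class DeviceException extends Exception {}

    class UsbScanner implements Device
    {
        public function read()
        {
            // A real driver would live here; on timeout it reports back.
            throw new DeviceException('scanner did not respond');
        }
    }

    // The caller neither knows nor cares which device sits behind the interface.
    function capture(Device $device)
    {
        try {
            return $device->read();
        } catch (DeviceException $e) {
            return 'device unavailable: ' . $e->getMessage();
        }
    }

    echo capture(new UsbScanner()), "\n";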

Which leads us back to the question, are there new ways that we can abstract our library functions, our information systems, such that the structures can change without having to create whole new systems? How will we afford flexibility even in our abstractions such that they too can change?

Saturday, November 7, 2009

Standing in the Temple of Interoperability and Extensibility

My research group likes to share links to interesting articles. This week the basket included:

Think Tank Stresses Importance of Information Sharing in Research and Teaching

Tim Berners-Lee: Machine-readable Web still a ways off

All of the links this week really get at the heart of a widespread need for interoperability, extensibility, and some standards for machine-readable contextuality.

I find it both really fascinating and totally counterintuitive that standards (when used properly) promote the creative (and unpredictable) expansion of the net, as they allow interoperability and sharing and reduce the duplication of effort. I don't think most people understand the degree to which they provide the substrate for the network to communicate. Because the network is so distributed and largely uncontrolled, that substrate is what provides the power to aggregate across systems. That is, however, the problem as well.

However, in the history of solutions in this arena, from html to packet switching to email, you see a pattern of innovation that is simple, efficient, and elegant. The need for semantic structure in the web is building, across institutions and across disciplines, and I'm willing to bet that we'll see a solution start to take hold in the next few years. The pressure is building, and the dam will break. As such, I would argue that we are in one of those important historical moments where change is about to form itself before our eyes.

Since I've had too much coffee and not enough procrastination, I'd like to indulge in theorizing about what innovation matrix may come out to solve this problem. I say matrix because it's such a complex systems and sociological problem. I don't think any one technology or innovation is going to get us from here to wholesale adoption of something like the RDF.

There are, however, some assumptions to predicting evolution. We have to assume that the nature of the system won't radically change, and that the assumptions that have given rise to the current system are not fundamentally flawed.

That being said, there are regularities to technological change. The innovation has to be simple enough to be distributed and elegant enough to be adopted. It has to be something that's obvious from the technology in hindsight. I see the following principles implicated:

  1. It has to be a simple step from the current technology to the innovation. For example, Twitter is an obvious innovation if you look at blogging culture, sms culture, and the adoption of mobile web applications into everyday life. It's an extensible platform that can run on the most basic of web-enabled phones, allowing blogging impulses to happen away from the desk and without the strange, tiered costs of sms.
  2. It has to be based on obvious and available reconfiguration of current technologies, but remixed in a way that isn't inherently obvious, or it would have been done already. In other words, it needs to take advantage of the current strengths of the system. It's not going to be something built in a free standing vacuum to address this problem.
  3. There needs to be either a low barrier or a high incentive for adoption. In other words, it has to be either easy or rewarding, which is a complex sociological calculation in itself. Regardless of the complexity of estimating this variable, there is a strong underlying argument for ease and simplicity within this factor. Basically, it needs to be cheap to adopt.
  4. There is also a strong argument for a multifaceted rewards system. In other words, it needs to maximize the number of groups and possible ways that it can benefit the system. If it only benefits a small group, it probably won't be adopted, regardless of the size of the benefit. In order to get a large enough net benefit, we might need to aim for a solution that yields a smaller benefit to a larger number of users.
  5. It needs to be injected into a place where it can take root in a sizeable portion of the population of online documents. It doesn't matter how simple, elegant or functional it is if no one is using it. One word for this problem: Betamax. Size matters when it comes to adoption and market share. The most successful path for the retroactive injection of structure is probably at the lowest attainable level.
  6. Centralization is tenuous in an online environment; the only examples that work are ones that are highly open and interoperable, such that the content can move fluidly between systems, or ones where demand is high.
  7. There is a strong generalized need for semantic structure for the web, but virtually no focused demand, because of a positivistic myopia. The deficiency of the system isn't immediately visible even to the most proficient users. They don't know what the searches aren't returning to them. We are trained to optimize our thinking within the current structure. It's a nebulous problem without clear solutions or villains. That's what makes it an interesting problem to watch, because it means that the crystallization of the solution will be somewhat unpredictable, but when it does crystallize it will precipitate a whole new iteration.
  8. The solution must be extensible. The system is a moving target, and the solution must either grow with or shed existing structures. Any successful solution must provide for both the present and the future. We must be able to add things to it in the future that we cannot conceive of now, because of the nature of the system.

Because of these factors, I don't think a centralized repository of metadata (semantic) information is feasible. The distributed, layperson-oriented and uncontrolled nature of the internet precludes a number of avenues of innovation.

However, I think the solutions will look mundane and yet powerful. Below are some that might pass through most of these limiting factors.

1. Embedding more semantic/metadata information within the html hyperlink definition itself.

Why this would work:
It wouldn't be a final solution. It doesn't provide the extensive benefits that the RDF would, but it could bridge the problem significantly with distributed labor, low cost, low overhead, integration into the structure of the internet, and distributed benefits.

It would only begin to address the problem of non-html documents on the web. In the end, the metadata really needs to be embedded in the document such that if you have the document, you have the metadata.
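
To make the idea concrete, here's one hypothetical shape it could take, sketched in PHP: the same anchor tag we already write, carrying a few extra machine-readable attributes. The attribute names and URL here are invented, not an existing standard, though efforts like RDFa point in this general direction.

    <?php
    // A hypothetical sketch (attribute names and URL invented): the link
    // itself carries a little machine-readable context about its target.
    $url     = 'http://example.com/articles/semantic-web';
    $title   = 'Machine-readable Web still a ways off';
    $creator = 'CNET News';
    $subject = 'semantic web';

    printf(
        '<a href="%s" data-title="%s" data-creator="%s" data-subject="%s">%s</a>',
        htmlspecialchars($url),
        htmlspecialchars($title),
        htmlspecialchars($creator),
        htmlspecialchars($subject),
        htmlspecialchars($title)
    );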


2. Creating a standardized file wrapper that embeds the metadata, just like the embedded metadata of mp3s or the wrapping of digital video files that encompasses multiple encoding types within a single file type. These two approaches could be extended to the entire world of online files; a rough sketch of what such a wrapper might look like follows the list of details below.

This would require a bunch of details to be successful:
  • The standards would need a library of definitions hosted centrally, just like for document type definitions within html/xhtml. This is so that they can be extensible, grow over time, and embed only the minimal amount of information in the file itself.
  • The metadata should also include document versioning information. There is such a need for embedded versioning information that this alone could drive adoption rates. Imagine if every document you emailed for review carried an edit date/time stamp for every user, regardless of which platform you or they used. The technology is simple and at our fingertips; think of it as email headers for web files. Semantic file management problems are not restricted to the internet. We need the next generation of embedded file information in a global way.
  • The wrapper would need to be based on a simple open architecture, such as xml.
  • The wrapper would require broad-spectrum buy-in from the software and gadget community. It would need to be designed and supported by software manufacturers such that it could be automatically generated when saving a file. If Microsoft and Adobe supported this endeavour, I believe that virtually all file-format makers would follow suit. It's not inconceivable to think that they would do this, and do it openly, given the history of the development of the PDF and the MS Office document types. Google, Microsoft, and Yahoo all have a great deal to gain from having a voice at the table with regards to increased semantic structure of documents.
  • Support would need to be such that users could save and modify the wrapper information on a variety of platforms and applications, preferably on the fly without actually opening the document. Browsers could be adapted such that the wrapper information is optionally editable upon download of a file (just like the filename and download location). Browsers could then also take contextual information from the website to pre-populate fields. Just as citation management software can recognize citations, browser and application plugins could be written to read and populate metadata fields on the fly while saving documents.
  • There could be scripts written to automate the wrapping of current document repositories on a large scale. Arguments could be made for publishers to do this so that their files can be tracked for copyright purposes.
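
As promised above, here is a very rough sketch of what such a wrapper might look like, built with PHP's standard DOM extension. Every element and attribute name is invented for illustration, not a proposed standard.

    <?php
    // A very rough sketch of the wrapper idea (element names invented):
    // a small XML envelope that carries the metadata and points at the
    // wrapped file, much like an mp3's embedded tags.
    $doc = new DOMDocument('1.0', 'UTF-8');
    $doc->formatOutput = true;

    $wrapper = $doc->createElement('wrapper');
    // Points at a centrally hosted definition, like a document type definition.
    $wrapper->setAttribute('definition', 'http://example.org/wrapper-definitions/basic-1.0');
    $doc->appendChild($wrapper);

    $meta = $doc->createElement('metadata');
    $meta->appendChild($doc->createElement('title', 'Draft grant proposal'));
    $meta->appendChild($doc->createElement('creator', 'J. Researcher'));
    $meta->appendChild($doc->createElement('subject', 'semantic web'));

    // Versioning information, one entry per edit, per user:
    $version = $doc->createElement('version');
    $version->setAttribute('editor', 'J. Researcher');
    $version->setAttribute('timestamp', '2009-11-07T10:32:00Z');
    $meta->appendChild($version);

    $wrapper->appendChild($meta);

    // The wrapped payload itself (or a reference to it):
    $payload = $doc->createElement('payload');
    $payload->setAttribute('type', 'application/pdf');
    $payload->setAttribute('href', 'proposal-draft.pdf');
    $wrapper->appendChild($payload);

    echo $doc->saveXML();
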
This proposition is certainly more ambitious, but then again the idea of the semantic web is highly ambitious. I would also argue that it is far more attainable/negotiable than the direct implementation of the RDF (because so few people can do anything in xml). Furthermore, it seems like this would support the implementation of the RDF, as the RDF could be one of the metadata types supported, complementing everything that has been developed so far.

And I'm sure this hypothesizing shows the gaps in my knowledge as much as anything. But you have to admit that it's an intriguing issue to think about, regardless of how things fall out. And while this may not be the solution that takes hold, it is likely that the answer will be something in this vein.