Conversations

Welcome to the O.K.I. Conversations Pages

Click on the links in the sidebar to visit a particular topic.  Commenting has been turned on for all of these topics, so feel free to add your thoughts.  You will need to be a registered user to join a conversation.

Redefining Interoperability

Redefining Interoperability
(Or why the IEEE and Oxford English Dictionary have it Wrong)
Jeff Merriman

Before I get into anything else, let me start by offering what I believe to be a highly useful definition of interoperability, one that has prevailed following eight years of work on the O.K.I. project:

Interoperability – “The measure of ease of integration between two systems or software components to achieve a functional goal. A highly interoperable integration is one that can be easily achieved by the individual who requires the result.”

Which, of course, begs a definition of integration:

Integration – “The act of making two systems work together to achieve a functional goal, regardless of how difficult or expensive that task might be.”

Why offer a definition of interoperability?  Why not simply go with the prevailing definitions available from the usual sources? Well, let’s look at them.

The IEEE defines interoperability as: “the ability of two or more systems or components to exchange information and to use the information that has been exchanged.”

The Oxford English Dictionary similarly defines the word “interoperable” as: “(of computer systems or software) able to exchange and make use of information.”

The IEEE definition is now over 16 years old, and it pertains to a field of study, computer systems design and engineering, that even today is still in a state of relative infancy.  So the hole that I would like to poke in this elderly definition, as well as in the similar OED one, is that today, as a field of practice, we generally understand that there is more we would like systems and software to do together than just exchange information.  We would also like to see them share functionality, and this is becoming a primary goal for much of the work our community is currently embarking upon.

But expanding the IEEE and OED definitions to simply include these broader kinds of “making things work together” does not capture another aspect of the idea of interoperability that is beginning to prevail: that systems and software should “just work” with each other, and that we would like to bring ourselves closer to the vision of “plug-and-play” for software, systems and information.  This is where making a distinction between “interoperability” and “integration” also becomes important.

As far as the word “integration” is concerned, the current Wikipedia definition is actually pretty good and deserves a thorough read, but in short it talks about the act of “gluing” together components of software systems to achieve some “overarching” functional goal.  It is defined as an act as well as a state, and it is very well aligned with the definition I suggested above.  The added point I make is to emphasize that the act of integration is something that can be achieved through any level of cost and effort that one is willing to invest.

Systems can be integrated through data exchange, over space and time, or through real-time access and manipulation of each other’s functionality.  The particular approach to integration will be driven by a number of things, the most important of which is the “goal” that is to be achieved.

Interoperability is really only about making these kinds of integrations as simple and cost effective as technologically possible in pursuit of that goal.

Using this thinking, Microsoft Word(tm) and Apple’s Pages(tm) can be considered integrated in their ability to exchange content using an agreed-upon format, RTF, and interoperable to the degree that a non-technical end user can actually achieve this task with a relatively easy set of user gestures.  Of course this kind of interoperability revolves around the very narrow goal of content exchange, and there are many other dimensions of integration and interoperability that Word(tm) and Pages(tm) fail to achieve.

So as we look at use cases from the perspective of interoperability it is valuable to ask two questions:

1) what is the integration goal that is being described? and
2) to what extent is a highly interoperable approach to achieving that goal of value?

Consumers and Providers:

With respect to interoperability and integration it is usually helpful to categorize software as either a consumer of things or functionality, or a provider (supplier) of things or functionality. A particular software application or system may be both a consumer and a provider, but it's helpful to keep these "hats" straight when discussing integration and interoperable approaches to integration.

In the example of RTF exchange, Microsoft Word(tm) is a provider when the user chooses “Save As” and selects the RTF format.  Pages(tm) is a consumer when its user selects the same RTF file from the file browser using the “File – Open” command.

In the case of the kinds of things the IMS community is hoping to achieve through the Tools Interoperability activity, a VLE/LMS might be a consumer of user-facing functionality provided by a remote application, but in turn might be a provider of low-level services, such as its grade-book services, to be consumed by that same remote application.
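
To make these "hats" concrete, here is a minimal, purely illustrative sketch in Java; the interface and method names are invented for this example and are not taken from the IMS Tools Interoperability specification. The VLE/LMS plays the provider role for grade-book services, and the remote application plays the consumer role when it records a score.

    // Hypothetical provider-side contract: the LMS exposes its grade-book
    // functionality as a service that remote applications can call.
    interface GradebookProvider {
        /** Record a score for a learner in a given course. */
        void recordScore(String courseId, String learnerId, double score);
    }

    // Hypothetical consumer: a remote assessment tool wearing the
    // "consumer" hat here, even though it may simultaneously act as a
    // provider of user-facing quiz functionality to the LMS.
    class RemoteAssessmentTool {
        private final GradebookProvider gradebook;

        RemoteAssessmentTool(GradebookProvider gradebook) {
            this.gradebook = gradebook;
        }

        void finishQuiz(String courseId, String learnerId, double score) {
            gradebook.recordScore(courseId, learnerId, score);
        }
    }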

Goals of Interoperability:

It is also helpful to think about “functional goals”: the functional requirements that truly benefit from highly interoperable approaches. Talking about interoperability in general usually does not move the conversation forward effectively. Here are some example interoperability goals with corresponding, admittedly highly generalized, scenarios:

Example Functional Goal: Content Exchange
Scenario: "We have identified a number of content authoring tools that look like they could be potential sources of content for a number of learning management systems."

Example Functional Goal: Modularity
Scenario: “We have identified a number of functional components that we want to insert into a number of larger systems, perhaps learning management systems, and take advantage of some core systems functionality of those larger systems”

Example Functional Goal: Risk Mitigation
Scenario: “We have a number of choices today and into the future of systems or software applications to achieve a particular goal. We want to be able to make a choice today and know that it will be easy and cheap to swap out for something better or more cost effective later on.”

Many to Many, etc:

Note that "a number of…" is a key aspect of the above example scenarios as an indicator for the value to be gained by taking a highly interoperable approach. If there is "only one" imaginable consumer of a thing or functionality AND "only one" imaginable provider of that thing or functionality, then perhaps there is no need for a highly interoperable approach. A one-time integration achieved through a more straight-forward software development techniques (like defining and consuming a locally defined Web Service) may be sufficient.  We often call this approach “binary integration”, because it focuses only on the requirements of integrating two and only two things.

If there is "only one" imaginable provider of a thing or a service, and a number of potential consumers then that provider will likely drive the particular interoperability approach. The same can be said for the only one imaginable consumer and many providers – the consumer will drive the approach. This often leads to an acceptable level of interoperability for the average user.

However, we usually find that over time we can identify "a number of…" on both sides of the equation for a particular goal, and there is usually somebody who finds themselves in the middle wishing for a higher level of interoperability, and this is where standards begin to become important.

For instance: I sometimes use Photoshop for post-processing my photographs. There are a growing number of third-party modules that I can "plug" into Photoshop to edit my photos in creative new ways. This is great for me and I can plug these together myself, without calling a software developer.  To me this has appeared highly interoperable particularly because I can do it myself.  Adobe dictates the standard, but that’s been OK up until now because all I was using was Photoshop.

I have now discovered Apple’s Aperture(tm) and wish those same modules could plug into that application.  I have also found another neat little photo editing application called Pixelmator(tm). That would make me even happier. As it stands now I would have to call the developers of my favorite modules and/or the developers of Aperture(tm) and Pixelmator(tm) and try to convince them to integrate. Not so easy for me. An interoperability standard in this space begins to look attractive.

Value Proposition, “Precarious Values”:

For software developers and project managers, building highly interoperable software is often harder than not doing so, and usually more expensive.  Doing so involves understanding a standard; engaging with a community, potentially of competitors, to develop, refine or profile that standard; and likely supporting a number of additional functions or data descriptions that the developer otherwise wouldn’t worry about. The immediate value to a project of easing future integration is usually not highly regarded.

Open Source projects and communities often assume that ease of integration can be achieved simply through providing source code to the community, regardless of design considerations. Unfortunately this assumes that the person or organization requiring the integration goal will have at hand the software development expertise necessary to achieve it.

Binary integration, using commonly available technologies, is often easier for developers and is therefore the most common approach to getting two things to work together. It is significantly more difficult to follow a highly interoperable approach, since the standards/specifications/best practices that are designed to achieve high levels of interoperability present a greater hurdle, and for the early adopter this effort comes with little immediate apparent value. (The term “precarious value” [Klausen, 2002] was identified at the Andrew W. Mellon 2007 RIT/SC Retreat to describe this issue for open software projects.)

So other, more global questions include: How can open source projects or commercial products be incented to follow an appropriately interoperable approach when there may be no immediate benefit to the developers or even to project/product stakeholders? The benefit is usually clearer for the next project, or the one after that, or for the eventual end users. How and why do we move from mere integration to interoperability?

Fundamentally, this becomes a user experience issue, as the benefit is ultimately achieved when users themselves can, through UI gestures, configuration, etc., achieve the results THEY want.   As with any marketplace of activity, it is up to the consumers themselves to drive market change through demand.  It is up to those of us promoting interoperability to provide compelling examples of the value of interoperability that will help to grow a more discerning consumer community.

Transformation

Transformation
Jeff Kahn, Scott Thorne

The world is on the cusp of a major transformation in educational software and content.  A combination of factors is leading to new models and markets. Those who take advantage of these new markets first are likely to have a significant advantage. At the heart of this development are open specifications.

Educational technology is still relatively young and is evolving rapidly. In the early days of audio, you had to buy music content and the device to play it on from the same manufacturer.  As that industry evolved, standards created a new market based on the fact that you could buy music in a standard format that could play on equipment from more than one manufacturer. We’re at a similar stage in educational systems and content. In the near future you should be able to use any number of software tools with content from many different sources. Other advances in audio provided a method of plugging components together in a standard way that allowed consumers to buy different vendors’ products knowing that they could be plugged together. This dramatically altered and expanded the audio market. Specifications such as the Open Knowledge Initiative (O.K.I.) Open Service Interface Definitions (OSIDs) are emerging, which hold the promise of creating standard software plugs.  This has the potential to create a market comprised of a large selection of educational software tools that are known to plug together and interoperate. This will have huge implications for both consumers and providers of educational tools and content.

If this happens, it could lead to a rapid transformation due to the network effect. Once a critical mass of software works this way, there will be a customer expectation that everything should just work. This should lead to several other trends, as it has in other industries. Once best-of-breed solutions can be plugged together easily by the consumer, many specialized tools will be developed to tackle specific areas. For example, in the area of content tools we’re starting to see specialized tools for searching, cataloging, or concept mapping that work with many content repositories.

Our largest obstacle to progress in establishing this new market is not having a shared vision that is communicated clearly to the wide spectrum of stakeholders. Many ideas, terms and concepts get confused and lead people to the conclusion that it’s a very complicated picture and a solution will be years away.

One example of this is around the word “open”. There are at least three ways we are using this term: there is “open source”, which means that everyone has access to the source code and is free to adapt it; there is “open content”, which means content that can be freely shared due to its licensing; and there are “open specifications”, which are shared specifications that have evolved through a community effort and are not controlled by one vendor.

We can expect both open source and open content to exist, but not exclusively. Any strategy that works only with these solutions could limit its wider application. Open specifications, on the other hand, define a meeting place and are needed to bring all the technologies and content together so that open source and vended solutions can work together. Instead of asking questions about how much open source to use, an enterprise would be better off asking what the percentage of packaged versus custom software is in their organization. Only through the further adoption of good open specifications can we actually have more packaged software that works together, whether it is vended or open.

Another of the principal obstacles is the general tendency to look for a silver bullet: one technology or idea that would solve the interoperability problem. In reality this is such a complex area that many things will have to contribute to the solution. For example, to use my hairdryer in another country, I need agreement on both the electrical characteristics and the plug shape. So in educational technology, service interface definitions, data structures and communication protocols will all be used together to achieve the end goal of interoperability.

Therefore it is important to continue work in all areas of specification development and adoption. In particular, individual domains might develop data structures that accelerate the automation and the process of producing educational content, allowing for interoperability at an even greater level. For example, getting agreement on vocabularies, schemas and ontologies in particular communities is a particularly helpful step towards interoperability. Some goals for the coming period might be those that accelerate the market evolution based on open specifications.  For example:

• Fund projects that have multiple partners, ideally both open source and commercial, and that interoperate only through the use of open specifications to perform some worthy educational goal.

• Create communities in particular domains to advance common ideas, requirements, and profiles of specifications.

• Support and promote all varieties of open specifications.

O.K.I. in the Enterprise

Download: “OKI in the Enterprise.pdf” (206.04 kB)
The Case for OKI in the Enterprise
Realizing a Service Oriented Architecture
Scott Thorne

Most enterprises today are faced with an array of IT challenges. They need to maintain an existing technology base while absorbing a wide array of new technologies. The costs of integration are high. The complexity and number of systems are growing. Technology continues to change at an increasing pace. Duplicate efforts result in pockets of overlapping functionality. Many of these issues are related to integration: making applications work with IT infrastructures and making system components work with each other. This is a hard problem that we’ve struggled with for years, and so we’re hoping that Service Oriented Architecture (SOA) will solve it all.

This document explores the benefits of SOA and introduces the value of O.K.I. Open Service Interface Definitions…

AuthZ Service Benefits

Authorization Service Benefits
Scott Thorne
 

Authorization is needed in most applications, but just because it’s commonly used doesn’t mean it has to be a service. The actual function of authorization can be achieved with or without implementing it as a service. So before embarking on a project to create or use an authorization service, you need to have a clear idea of the potential benefits. So what are some of the unique advantages of defining and using authorization as a service? The overall goal is improved authorization management, which leads to having the right authorizations in place and enforced. Some of the improvements that a service offers are:

• Authorization rules can be reused in multiple applications
• A common authorization user interface can be created
• Authorization maintenance can be distributed
• A centralized business process can be created
• The authorization mechanism can be substituted
 
Sometimes the advantages of a service are only gained over the long term and are not immediately apparent. If you are not interested in these benefits, then the extra work of isolating the authorization function as a service might not be worthwhile. Having the benefits clearly in mind will help drive the authorization design.

The main benefits of using authorization as a service are centered around integration. If two systems handle the same set of resources, then there may be authorization rules that they could share. For example, if there is a financial system and a separate financial reporting system such as a warehouse, they both might need to know who has access to what information. If each system were to maintain separate authorizations, they would not only duplicate work but also risk being out of sync. Managing common authorizations in the same place avoids this problem.
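
A minimal sketch of what such a shared authorization service contract might look like is shown below; the names are illustrative and simplified, and are not the actual O.K.I. Authorization OSID signatures.

    // Illustrative contract for a shared authorization service. Both the
    // financial system and the reporting warehouse ask this one service,
    // so the rules live, and are maintained, in a single place.
    interface AuthorizationService {
        /** May this agent perform this function on this qualifier (e.g. a cost object)? */
        boolean isAuthorized(String agentId, String functionId, String qualifierId);

        /** Create an authorization rule. */
        void grant(String agentId, String functionId, String qualifierId);

        /** Remove every rule held by an agent, e.g. when they leave. */
        void revokeAllFor(String agentId);
    }

    // Example consumer: the reporting warehouse checks the same rule the
    // financial system relies on, instead of duplicating it locally.
    class ReportAccessCheck {
        private final AuthorizationService authz;

        ReportAccessCheck(AuthorizationService authz) {
            this.authz = authz;
        }

        boolean canViewSpendingReport(String agentId, String costObjectId) {
            return authz.isAuthorized(agentId, "view-spending-report", costObjectId);
        }
    }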

A well-designed authorization service makes it possible to distribute authorization control to the people who know the resources and people involved. This puts authorization management in the hands of the person responsible for the resources, the person who should be making the authorization decision, and it greatly increases the likelihood that appropriate authorizations are in place. Without the ability to distribute authorization management, authorization requests often go through a chain of people or to a central place, where the chances of miscommunication increase. In this situation, the people entering the information don’t really know whether a rule makes sense and aren’t in a position to catch inadvertent errors.
 
Having a common authorization service allows authorizations from different applications or areas to be displayed and maintained with the same tool. Having a common approach to authorizations also makes it easier to train users: once they understand how authorizations work in one area, such as finance, it is much easier to understand a similar model in HR. It is still possible to build authorization maintenance functionality into specific applications using the service, but the service also creates the option of having a dedicated application for authorization maintenance.

Having a common authorization service also enables other centralized processes to be managed. For example, when an employee comes on board or is terminated, there is one place to go to adjust authorization rules. In addition, audit procedures are easier to implement in the area of authorization management.
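
Continuing the sketch above (still with hypothetical names), a centralized off-boarding step becomes a single, auditable call against the same service:

    // Hypothetical centralized off-boarding routine: one place to adjust
    // authorization rules when an employee is terminated.
    class OffboardingProcess {
        private final AuthorizationService authz;

        OffboardingProcess(AuthorizationService authz) {
            this.authz = authz;
        }

        void terminate(String agentId) {
            authz.revokeAllFor(agentId);
        }
    }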

These benefits go beyond ordinary authorization functionality, but they are examples of where a service is particularly effective. Having a clearly articulated set of goals for an authorization service is the first step towards implementation.
 

O.K.I. as Strategy

Download: “OKI As Strategy 3.pdf” (107.57 kB)
O.K.I. as Strategy
Catherine Iannuzzo

Service-Oriented Architecture (SOA) is an important emphasis of modern computing because it contributes to greater re-use of one’s software investments, enhances interoperability, decouples functionality for greater testability, maintainability, and flexibility, and can make application development quicker and easier. However, none of these outcomes is automatic simply by declaring that one has or uses services. To be effective, services should be part of a consistent architectural approach and offer a useful framework for applications. The OKI open specifications offer an excellent basis for a services architecture…

Architectural Approaches

Where do we need choice in eLearning?
Scott Thorne

Introduction
 
“What standards do we need in e-learning?” is a commonly debated topic. It should not be debated, however, without asking the opposite question: “Where do we need choice in e-learning?” Only by addressing both of these questions, where we need standards and where we need choice, can we achieve the desired results.
 
Creating specifications for the emerging e-learning environment is complex. The complexity arises from several factors.
 
• Information technology is changing
• How we use technology in education is changing
• There are many stakeholders
• Educational software needs to integrate into the wider IT environment
 
Goal & Vision
 
The ultimate goal is to create a learning technology environment that allows for tight integration as well as diversity. This requires carefully balancing the goals of interoperability and standardization with the competing goals of innovation and diversity. 
 
An Analogy
 
The field of law provides one analogy where these abstract goals are successfully balanced.  The US Constitution provides a broad set of laws which apply to the whole country. It allows individual states to define additional laws that apply statewide. In turn, state law allows municipalities to create additional local laws. The Constitution is by design an efficient and effective document. It specifies only what it needs to, and leaves open areas for variation. However, it does force states to agree with it and not pass conflicting laws. This creates a system where there is broad agreement over a limited set of things, but also locally defined laws. These local laws in different regions can be different and even conflicting, as long as they adhere to the Constitution. If the broader overarching rule set (the Constitution) is created first, then the environment for creating local laws is clear. All that is required to create local rules is to ensure that they are not in conflict with the Constitution.
 
What’s most impressive about the U.S. Constitution is what it doesn’t say. If its framers had tried to put in more detail, it would inevitably have had to change more often. Instead it remains a remarkably stable document.
 
Approaches
 
There are many approaches that work in solving complex problems. Sometimes a top-down, directed approach works well; other times an experimental, bottom-up approach is better. Sometimes both are required. In a problem space as large as eLearning, the most efficient way to tackle the problem is not to dictate a particular approach, but to bring all the resources and methods to bear in a coordinated way.
 
Another example of a complex system of law can be found in the European Union.  In the EU, countries already had well-established laws, and a common set of rules was created much later. In this case, the problem was how to create overarching rules that had minimal conflict with the local rules already in place. This approach, building practice over time and harmonizing afterwards, can lead to the same layered result. Which approach is more appropriate and efficient is an open question.
 
Top-down approach
 
Thinking through all of the issues of a system ahead of time would appear to be a faster approach, but doing so for a field as large and complex as eLearning would be impossible. However, the top-down approach adds value when it doesn’t dictate all the details, but puts some broad agreements in place. 
 
Gaining broad conceptual agreement, and subsequently refining it, is the fastest and surest way to success in areas as complex, and with opinions as diverse, as law and eLearning. This winnowing process happens all the time in other contexts: parking issues when disagreements arise, so that an agreement can ultimately be reached, is a common practice. If something is preventing agreement, it is set aside to be dealt with later.
 
The critical factor is that when an agreement is reached it must be concrete. It should be documented in a way that is specific and unambiguous. It should state only what has been agreed to, and no more, as simply and clearly as possible.
 
Time is saved by concentrating on what is common and documenting that, rather than listing all the differences. When an irresolvable difference is identified, it is set aside to be handled in a subsequent phase. After an agreement is reached and documented, the next phase can start. Groups with similar points of view around “parked” issues can break into separate discussions to see if a more specific agreement can be reached. Each of these groups will use the same process: agree on what’s common, document those commonalities, and, if needed, break into even smaller groups for separate, more specific agreements. The trail of agreements that results will range from the general, loose ones needed over a wide area to successive refinements for market segments, enterprises or other groups.
 
Degrees of Interoperability 
 
Interoperability is not binary. If it were, then there might be a single standard that everyone used. In a dynamic and diverse field such as e-learning this would be a mistake.  Instead there are degrees of interoperability, which allow things to be interoperable to differing extents.
 
Interoperability may start in pockets. An enterprise may achieve it among its systems. A market segment may achieve interoperability around a certain process, such as booking airline tickets. Although we would like to think that in the future everything will be interoperable, in practice it’s not so black and white.
 
Interoperability is best viewed from a local perspective. As a user, you want all the things that you use regularly to be tightly integrated; things done infrequently have less need to be interoperable. Therefore, there are gradations of interoperability: small pockets of tight interoperability as well as broad, loose interoperability. This means we can never enforce a universal solution, but must instead pursue a strategy of enabling varying degrees of interoperability.
 
Therefore, we will need both broad, general specifications, which promote broad interoperability goals, and more localized, constrained agreements to get tight integration in a specific community. This is similar to the scope-of-law example. Several sets of specifications need to be created, which need to align and cover differing scopes. If we take the top-down approach, i.e. create the general rules first and then the variants, how might the process work?
 
Since we don’t know at the start how many layers of specifications we need (federal, state, and county, or just federal and city), we also don’t know how many iterations it might take to get from broad general agreement to the specific agreement needed for tight integration in a particular domain. It depends on the complexity of the domain and the variety of opinions.
 
It is a matter of finding the “sweet spot”, where there is an agreement that is neither too general nor too specific. There is a point where making the specification more specific either makes the problem much more complicated or makes it less universal. At that same point, making the specification more abstract or general would not gain more applicability, such as acceptance in a wider community. If you are in this general area, then you might be at a “sweet spot”. It is important to find a way to document the agreement reached at this “sweet spot” rather than trying to push further to a level that makes it easier to document.  This is an important concept, since it is the problem domain that determines where these “sweet spots” are located, and therefore the forms of documentation of agreement might not be universal across domains.
 
How do we document the various forms of agreement?
 
There isn’t one answer to this question. Precisely documenting various forms of agreement is a tricky thing to do in practice, for a couple of reasons. Most people find it easier to produce a precise specification than to create a general one, because there are no universally accepted ways of documenting conceptual agreement. For some specific things, such as data structures, there are tools which can easily document the agreement, but very abstract, conceptual notions are harder to document. Additionally, everyone can see the value of the specific thing, but not everyone sees the value of the general one. For example, a group might find it easier to agree on defining a data structure using XML Schema than on just producing a conceptual entity-relationship model. However, it would be better if they stuck to the latter: getting the general agreement first, before taking the next step of a more specific and precise binding. The value comes from gaining consensus, and from creating better specific specifications in the future. Agreeing on the entities, their definitions, and their relationships before specifying each individual field might be a useful place to reach. Then a fallback position is documented, so that if things bog down, you are not back to square one.
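
As a hedged illustration of “entities and relationships first, field-level binding later” (all names here are invented for the example, and Java is used only as a convenient notation), a conceptual agreement about learning content might be recorded as nothing more than named entities and the relationships between them, deliberately deferring the attribute lists and the eventual XML Schema binding to a later, more specific phase:

    import java.util.List;

    // A deliberately sparse record of a conceptual agreement: which
    // entities exist and how they relate. Attribute lists, data types
    // and the eventual XML Schema binding are left for a later, more
    // specific phase of agreement.
    interface Author { }

    interface LearningObject {
        Author createdBy();              // relationship: created-by
        List<LearningObject> parts();    // relationship: composed-of
    }

A document at this level is easy to agree on, and it gives a concrete fallback position if the later, more precise binding discussions bog down.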
 
Conversely, if we constrain ourselves to documenting agreements only in a certain way, for example as XML schemas, then only certain types of agreements can be documented. It would be better to look for alternative forms of documenting agreements than to force them to match the tools we have. Documenting assumptions, definitions, and conceptual data models may be more appropriate in certain circumstances.
 
There is a certain engineering and technical mindset that equates more detail with better: every detail must be known, and getting a complete solution is the goal. This runs contrary to the method described here, and needs to be explicitly guarded against.
 
Another point to raise here is that “testing” for general agreement is a low-cost step. If one assumes that everyone agrees about certain things, it is easy to test and document this. The only case in which it might be hard is when the expected agreement turns out not to exist. Even then, it is more efficient to find out sooner than after work has proceeded under the wrong set of assumptions.
 
There are some questions that could be asked which might only expose diversity. For example, asking if a certain action always happens atomically or in batch is bound to yield both answers. This has the tendency to exponentially complicate the agreement. Concentrating on just what is done rather than how helps avoid these complications.
 
Conclusion
 
The overall point is that we need to continue to ask questions and test assumptions as we proceed, and resist the temptation to latch onto a particular mechanism even if it is at hand and easily agreed, until we’ve ascertained that it brings us closer to our real objective. We still need to get to specific bindings in order to achieve interoperability, but we might not want to do this as a single step. Agreements about concepts, data structures, service definitions and protocols are all needed to reach the ultimate goal, but taking measured steps to get there might be the quickest way.
