Monday, December 08, 2008

Social Networks and Strong(er) Auth

I've been thinking recently about strong(er) authentication mechanisms and their slow uptake in the mass market. While registering for an online brokerage account, I was required to enter an email address. I thought through the many email addresses I use and decided on the one that has a strong auth mechanism attached.

One of the reasons for this decision is that my ever-expanding Facebook world has a lot of information about me that might or might not be relevant to a "password reset attack". I recently found a bunch of childhood friends on Facebook and that has been wonderful. However, it also means that all the information about elementary schools attended, childhood friends, etc. is exposed to all my other friends on Facebook. From an information perspective, I don't have any problems, but it does concern me from a security perspective.

Rather than think through what information is available on Facebook, and whether any of that information was used with the "Security Questions" for the email account, I chose to pick an email address that can only be accessed via a 2nd factor authentication mechanism.

So, my question/thought is, "Could social networks be the forcing function that drives consumer adoption of strong(er) auth technologies?"

Wednesday, December 03, 2008

Is it really aggregation vs federation?

In a post this past Sunday, Om Malik suggested that users want aggregation, not federation. While I totally agree that users want aggregation (e.g. having all their relevant information in one place), I don't believe aggregation is in conflict with federation. Rather, the two concepts are orthogonal.

I associate aggregation with API access to my data distributed across the web. The exception is closed networks like Facebook that provide all the services within a walled-garden environment. So for aggregation to work in the "open web", it must be able to access my data wherever I've chosen to place it. This requires explicit user consent (a la OAuth) for the aggregator to access my personal data at different services.

Now in order for me to grant consent, and for the aggregator to be able to access my personal information, I need to authenticate to the service provider of my data. This authentication step is simplified by using federation (e.g. an OpenID valid at all my different service providers).

So federation really enables a safer, more secure, aggregation capability for users.

Thursday, November 20, 2008

OAuth and SREG and MapQuest! Oh My!

This has been a great week for AOL and our efforts to support the "Open Stack". While our progress is not as fast as those more nimble and light-footed, I still believe the progress is significant.

Yesterday, the AOL Mail Gadget for iGoogle was announced. This gadget uses the OAuth capabilities of the iGoogle container to access OAuth based AOL Mail web service APIs.

Also yesterday, AOL announced its preview support for the SREG 1.0 extension to OpenID. As noted in my message to the OpenID general mailing list, there are still a number of user experience issues that need to be resolved around SREG/AX support, and I hope that our initial implementation will help consolidate the necessary industry best practices.

Finally, today MapQuest launched a new feature called My MapQuest which allows users to store addresses, driving directions, phone numbers for "Send to cell", and even the ability to estimate fuel costs for a trip based on your personal vehicle. My favorite part of this new capability is that anyone can use it because it supports OpenID. I believe this is the first web site from a major provider, other than a blogging product, to support OpenID as a relying party. (Feel free to correct me in the comments).

Fall Kaleidoscope


Fall Kaleidoscope, originally uploaded by GFletch.

The leaves are nearly all gone where I live, but here is a reminder from just a couple weeks ago of the glory of fall.

Tuesday, November 11, 2008

Important topics at IIW2008b

If I've got my timing right, this entry will post about the same time as the schedule for IIW is being created by those attending. Unfortunately, due to circumstances beyond my control, I'm not able to attend in person. But that doesn't mean that I'm not actively following the discussions as best I can remotely. :)

Here are some key issues that I'm hopeful the community will be able to address during the next two days.
  1. User eXperience (UX) for Relying Parties (RP). This is a critical element of making OpenID understandable and valuable to the "masses". There has been quite a bit of work on this recently and I'm excited to see what will develop from the face to face meetings on this topic.
  2. XRDS and Discovery. This is really important for the "open stack" and deals with the concept of describing meta-data for resources. As it relates to "personal service discovery", the meta-data is about a user's OpenID and points to the related services of that identifier. This is crucial for connecting services to a user and allowing the kinds of "dynamic discovery" that make the "open stack" work.
  3. OpenID TX Extension. This extension being proposed into an OpenID working group is about adding a layer of trust to OpenID transactions. Right now it focuses on tying transactions to contracts between parties but hopefully the working group will extend this to adding a "trust fabric" to OpenID.
  4. Email as an OpenID identifier (or as a pointer to an OpenID). This is part of the UX discussion in that many (most?) people don't know they have an OpenID but they do know their email address. With Microsoft and Google supporting OpenID (at least in beta), most people have an OpenID. So this discussion is about how to leverage this with users to increase the adoption of OpenID.
  5. Email verification. This is slightly related to #4 but also different. In the SREG and AX models, an RP can request an email address but it doesn't know whether the OP has verified that email or not. In some cases, if the RP is talking to an OP that supports email and the returned email address is "managed" by the same company as the OP, then the RP might consider that email address verified. However, Yahoo! and others are looking to support an OpenID extension that would allow an RP to directly verify an email address with the email provider (providing the email provider is also an OpenID Provider; or at least supports this extension).
  6. OpenID + OAuth "Extension". This topic is addressing how to allow a Consumer to both authenticate a user and get an OAuth access token and secret in a way that the user only has to authenticate and authorize once. There are a number of significant issues with this effort especially if the extension tackles allowing one SP/OP to verify/validate another SP/OP's tokens. Right now, this effort is focusing on allowing the OP to present not only authentication but also authorization UI so the flow is simplified for the user.
  7. OAuth Extensions...
  8. OAuth support for Mobile/Desktops/Appliances/etc. This topic deals with a simple mechanism for mobile apps or appliances to participate in the OAuth flow even if the device doesn't have browser support and very limited input capabilities.


Friday, October 03, 2008

Subscribing to Activity streams

Yesterday Paul asked a good question about subscriptions and identifiers in this push model for activity. If we take an explicit use case regarding how Paul subscribes to George's activity feed, the key is that Paul has to have at least one identifier for George that can be used to discover his activity service.

Leveraging XRDS, OpenID, OAuth and Portable Contacts, this should be doable. Here is a graphic and flow; a rough discovery sketch follows the list.



  1. Paul logs into his SocialStream collector (with his OpenID)
  2. The SocialStream collector discovers Paul's PortableContacts service (via XRDS)
  3. Paul authorizes his SocialStream collector to access his PortableContacts service (via OAuth)
  4. The SocialStream collector asks Paul if he wants to subscribe to any of his contacts' activity feeds (retrieved from the Portable Contacts service)
  5. Paul selects his friend George
  6. The SocialStream collector uses the identifier(s) for George to discover George's activity service (via XRDS discovery)
  7. The SocialStream collector subscribes to George's activity service
    • If subscribing to a public feed, no other information is needed
    • If subscribing to a protected feed, then OAuth can be used to determine if Paul is allowed access to the feed
    • Membership determination can leverage Portable Contacts tags as described here
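
To make the discovery in step 6 concrete, here is a rough sketch in Python. The activity-service <Type> URI and the subscription call are purely hypothetical placeholders (nothing like them has been standardized), so treat this as an illustration of the XRDS lookup, not a real API.

# Sketch only: the activity-service Type URI and subscription parameters are
# hypothetical; only the XRDS structure itself is real.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

XRD_NS = 'xri://$xrd*($v*2.0)'
ACTIVITY_TYPE = 'http://example.com/activity/1.0/service'  # hypothetical

def discover_activity_service(xrds_url):
    # Fetch George's XRDS document and look for his activity service
    req = urllib.request.Request(xrds_url,
                                 headers={'Accept': 'application/xrds+xml'})
    doc = ET.fromstring(urllib.request.urlopen(req).read())
    for service in doc.iter('{%s}Service' % XRD_NS):
        types = [t.text for t in service.findall('{%s}Type' % XRD_NS)]
        if ACTIVITY_TYPE in types:
            return service.findtext('{%s}URI' % XRD_NS)
    return None

def subscribe(activity_endpoint, callback_url):
    # Public feed case: a plain POST announcing where to push events.
    # A protected feed would carry OAuth parameters as well.
    data = urllib.parse.urlencode({'callback': callback_url}).encode()
    return urllib.request.urlopen(activity_endpoint, data=data)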


Thursday, October 02, 2008

MyActivity vs MySocialStream

The recent production announcement by Gnip, Inc got me thinking about activity and the push vs poll model for tracking activity. As I see it, there are two kinds of activity that I want to track: (1) my activity and (2) my "friends" activity. Hence the terms MyActivity and MySocialStream.

In a truly open and distributed web environment, I should be able to separate these functions into any providers I desire. Right now, most social networks combine both my activity and my friends activity into a single stream and service. Thus if this service wants to aggregate activity, it has to poll each of those external services for updates and then merge them into the activity stream.

An interesting aspect of Gnip, Inc's offering is that it will push updates, even filtered updates, to an endpoint. Push seems like a better model, especially if it can be "throttled" in some way (meaning the service pushing the updates can be configured to combine multiple updates in a 5 min period into a single push event).

If we model my activity separately from my friends' activity, then it should be possible to define an API for an activity service. This activity service would accept my activity events from across the web. This activity service would be discoverable via my XRDS document so that any site I visit can discover my activity service and report activity to it. If my friends want to track my activity, they can subscribe to my activity service. This subscription mechanism could include the use of OAuth for subscription to restricted activities. Managing who is allowed to see which events could leverage Portable Contacts for group membership. This way, whenever I'm involved in some activity across the web, that activity will get reported to my activity service, which in turn will push out the event to all friends that have subscribed.
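
As a minimal sketch of what "reporting activity" might look like: the endpoint, event fields and JSON shape below are all assumptions, not a published format.

# Sketch only: there is no standard activity-event API; the endpoint and JSON
# fields below are made up to illustrate the push model described above.
import json
import urllib.request

def report_activity(activity_endpoint, actor, verb, obj):
    event = {
        'actor': actor,   # e.g. the user's OpenID
        'verb': verb,     # e.g. 'posted', 'favorited'
        'object': obj,    # e.g. the URL of the photo or article
    }
    req = urllib.request.Request(
        activity_endpoint,
        data=json.dumps(event).encode('utf-8'),
        headers={'Content-Type': 'application/json'})
    # The activity service stores the event and pushes it to subscribers
    return urllib.request.urlopen(req)

# report_activity('https://activity.example.com/events',
#                 'http://alice.example.com/', 'posted',
#                 'http://photos.example.com/alice/1234')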



As for all my friends and their activity that I would like to see, I just need to have my MySocialStream service subscribe to their activity services. This aggregated data could be displayed in any venue: OpenSocial gadget, web page, iPhone app, etc.

All that's needed is the specifications for the APIs and the ability to list them in my XRDS. Well, a little more than that, but it's definitely more doable today than ever before.

Thursday, September 11, 2008

Protected Sharing on the Open Web

One feature I have wished for on the web for quite some time is the ability to securely share family photos with my extended family and close friends. Currently, all photo sharing sites (that I’ve been able to find) require all parties to have an account at that photo sharing site in order to securely share the photos. Note that I don’t want the current solution of “security-by-obscurity” where a big random URL is created and emailed to the group.

I think we can build a much better sharing environment using existing and emerging specifications like OpenID, OAuth, Portable Contacts and XRDS-Simple. Here is a use case and one way it could work.

I have an account at flickr and I create an album (flickr set) that I want to share with my extended family. Previously, I’ve associated my flickr account with my plaxo account (using OAuth) to enable flickr to access my contacts (via “Portable Contacts”). Flickr needs to use XRDS-Simple to find my “portable contacts” service and OAuth discovery to set up the connection between the two services.

  1. I tell flickr I want the new album (“Family photos”) protected and shared only with those people in my contacts lists that are labeled as “Family”.
  2. Flickr marks the album as “protected” and remembers that those allowed to view the album are anyone who is a member of my “Family” tag at my “Portable Contacts” service.
  3. I send out an email to my family members with the direct URL to the protected resource (note that flickr could also do this for me since it has a connection to my portable contacts service).
  4. A family member receives the email and clicks the URL to the protected album at flickr
  5. Flickr recognizes this is a protected resource and returns both the OAuth information for how to access the protected resource as well as HTML telling the user that the resource is protected and the user needs to authenticate
  6. The family member logs into flickr using their OpenID (not currently supported)
  7. Flickr takes the OpenID and asks my “Portable Contacts” service whether this OpenID has a tag of “Family” (basically a membership query; see previous post and the sketch after this list)
  8. If the user's OpenID is a contact with a tag of “Family” then they get access to the album, otherwise they are denied
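
To illustrate step 7, here is a rough sketch of the membership check flickr could make against my “Portable Contacts” service. It assumes an OAuth-protected endpoint, that the filterBy/filterOp/filterValue parameters can be combined this way (an open question in the draft spec), and that the response carries a totalResults count.

# Sketch only: assumes filterBy/filterOp/filterValue can be combined into a
# membership-style query and that the response includes 'totalResults'.
import json
import urllib.parse
import urllib.request

def is_family(contacts_endpoint, visitor_openid, oauth_header):
    query = urllib.parse.urlencode([
        ('filterBy', 'urls'), ('filterOp', 'equals'),
        ('filterValue', visitor_openid),
        ('filterBy', 'tags'), ('filterOp', 'equals'),
        ('filterValue', 'Family'),
    ])
    req = urllib.request.Request('%s/@me/@all?%s' % (contacts_endpoint, query),
                                 headers={'Authorization': oauth_header})
    response = json.loads(urllib.request.urlopen(req).read())
    # Any matching entry means the visitor's OpenID is tagged 'Family'
    return response.get('totalResults', 0) > 0

# if is_family(contacts_api, visitor_openid, oauth_header):
#     show_album()
# else:
#     deny_access()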

What’s currently missing to make this a reality are...
  • Relying parties accepting OpenIDs
  • Users knowing they have an OpenID and using them
  • Portable Contacts adding “membership” type APIs
  • Portable Contacts supporting an explicit 'urls' type of 'openid'


In finalizing this blog post, I read David Recordon's summary of the Portable Contacts hackathon held last night. The following quote shows this is very near reality. Yeah!

Brian Ellin of JanRain has successfully combined OpenID, XRDS-Simple, OAuth, and the Portable Contacts API to start showing how each of these building blocks should come together. Upon visiting his demo site he logs in using his OpenID. From there, the site discovers that Plaxo hosts his address book and requests access to it via OAuth. Finishing the flow, his demo site uses the Portable Contacts API to access information about his contacts directly from Plaxo. End to end, login with an OpenID and finish by giving the site access to your address book without having to fork over your password.


Tagging for contacts

Has anyone else had this problem? You need to IM someone and you can't remember which group you filed their name under. I realize that if I just kept an alphabetized list, and I remembered their name, this wouldn't be a problem. However, sometimes it's not that I’m looking for a particular person, but someone on a particular team.

Basically, what I’ve found is that I want to attach “tags” to my contacts that describe attributes about that person. Then I can find people by my own “folksonomy” whenever I need to. This would allow me to “query” my contacts (or IM client) by a “tag”. So I could say, “Show me all the architects at AOL who are currently online.” It also allows me to do queries like “Does Bob have an ‘Extended Family’ tag?”. This is really a membership query and can be thought of as “Is Bob a member of the group ‘Extended Family’?”

Combining tags with contact data allows for all sorts of interesting capabilities. For instance, my Adium IM client could query my Portable Contacts service for all contacts with at least one IM identifier present and return all IM identifiers and tags. With this information Adium could auto-create groups, show me a “tag-cloud” of who’s online, etc. Another use would be true access-controlled sharing of protected resources. I’ll have another post on that soon.
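
Here is a minimal sketch of the grouping idea, assuming each contact entry carries 'ims' and 'tags' fields roughly as described in the draft Portable Contacts schema.

# Sketch only: assumes contact entries shaped roughly like the draft
# Portable Contacts schema, with plural 'ims' and 'tags' fields.
from collections import defaultdict

def group_by_tag(entries):
    groups = defaultdict(list)
    for entry in entries:
        im_ids = [im.get('value') for im in entry.get('ims', [])]
        if not im_ids:
            continue  # only interested in contacts reachable over IM
        for tag in entry.get('tags', ['untagged']):
            groups[tag].extend(im_ids)
    return groups

# groups = group_by_tag(contacts_response['entry'])
# groups.get('AOL architect', [])  -> IM identifiers to show as one group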

With the emerging Portable Contacts specification, I think there is a great opportunity to enable this kind of capability in an open standard. The Portable Contacts spec already supports both tagging and filtering. What is a little unclear from a quick read of the specification is whether filters can be combined. However, it should be easy with this specification to support the queries listed above. For a membership-style query, the following should suffice...

/@me/@all?filterBy=urls&filterOp=equals&filterValue=http://bob.example.com&filterBy=tags&filterOp=equals&filterValue=ExtendedFamily


Wednesday, September 03, 2008

Continuing the discussion...

This post is in response to the thoughts Praveen posted on his blog regarding Open Identity Tokens.

These thoughts around an Open Identity Token are more focused on enabling the sharing of access-controlled resources than on duplicating existing OAuth functionality. One use case that OAuth doesn't currently solve is my desire to access a protected resource where I DON'T have an account at the service provider managing the protected resource. An Open Identity Token lets the service provider grant access to the protected resource (not mine, but my friend's) without my having to have an account at the service provider.

My vision was that an Open Identity Token could be passed as part of a standard OAuth invocation allowing multiple verifiable identities to be specified in the API call. The OAuth mechanisms bind the Consumer, the Service Provider and the Identity into a single token. This doesn't leave room for additional identities.


Some specific comments to the points raised follow...

"Bob’s discovery service" might not know Alice with the same Id (OpenID) - Bob might have only entered Alice's alternate email address that doesn't even resolve to the same OpenID that Alive used currently to sign in to HikingTrails.com


I think this problem is out of scope for identity tokens. This is really an identity "association" problem and a problem space that I believe Portable Contacts could grow into solving. Just as with Adium (an IM client for the Mac), where I can merge multiple IM identifiers into a single identity, I should be able to do the same with Portable Contacts. I would love to see Portable Contacts grow into allowing membership-based APIs so that a service provider could contact my Portable Contacts server and say... "Is this identity token a member of George's 'hiking buddies'?".

HikingTrails.com might need to go back to Alice's OpenID provider for each (notification) service that it wants to invoke on behalf of the user. Bob might use notification service A, David might use notification service B, and so on... - where A, B, etc.. totally different services.


I don't think that hikingtrails.example.com would need to go back to Alice's OpenID provider. However, it would need to go to each of her friends' discovery "services" to find their notification services. One of the goals is to allow the dynamic distribution enabled by the "web". To me this is the benefit of having a discovery "service". Any person or service can contact my discovery service and find my preferred services (much like the use case you outline at the end of your post). While in many cases my identity provider may also be my discovery service, it doesn't have to be that way and the protocols shouldn't require that.

If the Identity/OpenID Provider provides a Open Token verification API, then it would have no way to make sure the token is being used at the same place for which it is granted. This goes back to the same problem that was solved by doing a RP Discovery in OpenID2.0.


Actually, I don't think the identity provider cares where the identity token is presented. The purpose of this identity token is to provide a "verifiable" identity (in the same vein that Amazon provides the "real name" feature). To me, the key is whether the service provider that receives the identity token can have confidence that this Consumer is "allowed" to send it the identity token. That's why the Consumer uses a nonce and a different hash value than the one delivered to the Consumer by the identity provider.

This requires a real discovery service (process) instead of a simple XRDS (static) file hosted some where - since the services defined in the XRDS will be anyway protected, there shouldn't be any harm in saying "my notification service is here and oh btw, it's only open to a restricted list of people so you might not be able to send notification to me".


This is a great point. It does require more processing logic than just returning a file on disk. However, any protected resource requires a service, even when using OAuth, so I don't see that as a big hurdle. Even if we separated the discovery information into public and non-public, it would still simplify the logic to be able to serve the non-public data based on an identity token versus a full OAuth UI experience.

Open Identity Token seems less trust worthy - of course the same problem that people attribute to OpenID but at least in OpenID case, it is not directly meant for a specific service invocation - it's merely for knowing who the user is and the RP/SP can do more things before it allows the user to do certain things.


I guess I'd argue that the trust of the token is dependent on a lot of factors. Does the service provider trust the identity provider? (This is the same trust question that OpenID faces.) Can I trust the security of the protected token? (As described in my post, it's probably only as good as OpenID "dumb mode", but that is pretty easily solvable.) I'd also argue that there is great value in having a verifiable identity for certain operations. Yes, I can get a verifiable identity by using front-channel requests and asking the user to authenticate... again... but if it's not needed, why put the user through that experience? Also, using an Open Identity Token where a verifiable identity is required significantly reduces the number of interactions between the Consumer and Service Provider.
  • With an Open Identity Token...
    1. Consumer accesses protected resource at the Service Provider with Open Identity Token
    2. Service Provider verifies Open Identity Token with Identity Provider
    3. Service Provider performs access-control check and returns response

  • With standard OAuth...
    1. Consumer accesses protected resource at the Service Provider
    2. Service Provider returns error and requests authorization
    3. Consumer requests the RequestToken
    4. Consumer sends user to the Service Provider 'Authorize' endpoint
    5. The Service Provider uses OpenID checkid_immediate to attempt to authenticate the user (assuming the Consumer sent the Service Provider an OpenID for the user)
    6. The OpenID Provider returns that the user is logged in
    7. The Service Provider invokes the Consumer's callback method (front-channel)
    8. The Consumer requests the AccessToken
    9. The Consumer re-tries the initial request for the protected resource (and gets access if the identity associated with the OAuth AccessToken is in the ACL)


In general in the current social networking era where things (notification) are more publish/subscribe model, not sure how important it is to solve this use case. Most of the user's anyway still use their email addresses not a notification service.


With an Open Identity Token, there is no requirement that the UserIdentifier value be the user's OpenID. It could be the user's email address, an opaque blob, a signed SAML Assertion, etc. Again, I consider the identifier association problem to be "out of scope" for identity tokens.

Thanks for the great comments. This is exactly the kind of discussion I hoped to start. One of my driving motivations in this is to enable easy access-controlled sharing such that my parents and in-laws don't have to have accounts at my personal photo service, and yet I can ensure that only the people I want can access my personal photos. I've never liked the security-by-obscurity model used for "privately" sharing my photos with others who don't use my preferred service.

Open Identity Token

Assuming that an “Open Identity Token” is useful, here are some of my initial thoughts.

  • The identity token needs to clearly identify the identity provider that issued the token, some value that identifies the user, and I believe the party the token was initially issued to (the Consumer in OAuth speak).
  • The value that identifies the user must support opaque values to prevent this token becoming a global correlation handle (if desired by the involved parties). Of course, the user identifier could be the user’s OpenID.
  • The identity token must be signed in some way and protected from replay attacks.


If we go back to the use case from yesterday’s post, using an Open Identity Token would enable the flow to work like this.

  1. Alice logs into hikingtrails.example.com with her OpenID
    • When hikingtrails.example.com invokes the OpenID flow, it asks Alice’s OpenID provider to return an Open Identity Token
    • hikingtrails.example.com receives the OpenID assertion and Open Identity Token
  2. Alice uploads a GPS track and some photos of a new trail she hiked over the Labor Day weekend.
  3. At the conclusion of her upload, hikingtrails.example.com asks Alice if it should notify her friends about her activity.
  4. Alice thinks that’s a great idea and agrees.
  5. So hikingtrails.example.com queries portablecontacts.example.com, using pre-established OAuth credentials, and retrieves Alice’s list of contacts with a tag of “hiking buddy”.
  6. Now for each of these friends, hikingtrails.example.com has to discover the “notification” service and send it the new activity message.
  7. One of Alice’s friends, Bob, only exposes the endpoint and metadata of his “notification” service to a restricted list of people
  8. hikingtrails.example.com queries Bob’s discovery service, presenting Alice’s Open Identity Token
  9. Bob’s discovery service validates Alice’s Open Identity Token and then returns the non-public service endpoint and metadata


In this flow, Alice does not have to interact via some user interface with Bob’s discovery service. Of course, the identity represented in the Open Identity Token needs to be resolvable into an identifier that Bob’s discovery service can use.

Finally, here are some initial technical implementation ideas...

I was thinking that the identity provider could construct the token and “sign” it with a HMAC_SHA? hash. The signature-base-string would be IdentityProvider:Consumer:UserIdentifier and the identity provider would construct a random value to use as the secret in the HMAC_SHA? hash. This value would need to be remembered based on the IdentityProvider:Consumer pair. What would be returned (probably base64’d) as the Open Identity Token would be “IdentityProvider:Consumer:UserIdentifier,hash”.

When the Consumer that receives the token wants to use it in an API call, it constructs a unique Open Identity Token (to protect against replay of the token) for the API call. This token uses the hash received from the Identity Provider as the secret in a new HMAC_SHA? hash. The signature base string for this hash would be “IdentityProvider:Consumer:UserIdentifier:Nonce”. What would go on the wire as the token would be base64(“IdentityProvider:Consumer:UserIdentifier:Nonce,hash”).
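
To make that concrete, here is a rough sketch of the two construction steps. It assumes HMAC-SHA256 for the unspecified HMAC_SHA? hash and that none of the fields contain a ':' (a real format would need proper encoding); it is an illustration of the idea, not a proposal-quality spec.

# Sketch only: HMAC-SHA256 stands in for the unspecified HMAC_SHA?, and the
# ':'-delimited format assumes the fields themselves contain no ':'.
import base64
import hashlib
import hmac
import os

def idp_issue_token(idp, consumer, user_id, secret_store):
    # The identity provider remembers a random secret per (idp, consumer) pair
    secret = secret_store.setdefault((idp, consumer), os.urandom(32))
    base = '%s:%s:%s' % (idp, consumer, user_id)
    digest = hmac.new(secret, base.encode(), hashlib.sha256).hexdigest()
    return base64.b64encode(('%s,%s' % (base, digest)).encode()).decode()

def consumer_wrap_token(issued_token):
    # The Consumer re-signs with the hash it received, plus a nonce, so the
    # token that goes on the wire cannot simply be replayed.
    base, received_hash = base64.b64decode(issued_token).decode().rsplit(',', 1)
    nonce = base64.b16encode(os.urandom(8)).decode()
    signed = '%s:%s' % (base, nonce)
    digest = hmac.new(received_hash.encode(), signed.encode(),
                      hashlib.sha256).hexdigest()
    return base64.b64encode(('%s,%s' % (signed, digest)).encode()).decode()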

When a Service Provider receives the Open Identity Token, it can verify the token by sending it to the IdentityProvider specified in the token. For OpenID this would require a new extension and “API” method. Note that in the verification step, if the user identifier was opaque in the token it can be resolved into something the Service Provider can use. This allows for generating tokens that are unique to a specific context (no global correlation) while still providing the Service Provider with the data they need.

In this model, only the identity provider can validate the Open Identity Token because it is the only entity (besides the Consumer) that has the secret used by the Consumer in signing the token. All the identity provider needs to do is look up the hash it gave to that Consumer and then use it in a HMAC_SHA? hash of the “IdentityProvider:Consumer:UserIdentifier:Nonce” string and finally compare hashes.
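
And the matching verification step at the identity provider, under the same assumptions as the sketch above:

# Sketch only: same assumptions as the construction sketch above.
import base64
import hashlib
import hmac

def idp_verify_token(wire_token, secret_store):
    signed, presented_hash = base64.b64decode(wire_token).decode().rsplit(',', 1)
    idp, consumer, user_id, nonce = signed.split(':')
    secret = secret_store.get((idp, consumer))
    if secret is None:
        return None  # no token was ever issued to this Consumer
    # Recompute the hash the Consumer should have used as its signing key
    base = '%s:%s:%s' % (idp, consumer, user_id)
    issued_hash = hmac.new(secret, base.encode(), hashlib.sha256).hexdigest()
    expected = hmac.new(issued_hash.encode(), signed.encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, presented_hash):
        return None
    # A real implementation would also track the nonce to prevent replay and
    # could resolve an opaque user_id into something the SP can use.
    return user_id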

I realize that going back to the Identity Provider to verify the token does have some drawbacks: privacy (leaking where this identity token is used), complexity and performance (an extra lookup/validation is required). But since I’m not a security expert, I’m hoping that others will be able to modify these ideas to allow for direct Service Provider validation. It’s just critical to me that the mechanism used to generate the Open Identity Token allow for both undefined “circles of trust” (e.g. OpenID) as well as more closed or dynamic “circles of trust” (e.g. SAML). This might be as simple as leveraging the OAuth signature method and then supporting RSA signing.

Tuesday, September 02, 2008

Protecting "discovery" information?

I’ve been thinking a lot lately about discovery of personal services (e.g. endpoint and metadata of my “portable contacts” service, endpoint and metadata of my preferred “email service”, etc). One problem with enabling discovery of this kind of information is that it leaks information about me. For example, I might not want the world to know where I keep my personal photos.

So the question is... How, in the world of open identity protocols, do I restrict access to a subset of my “discovery” information? The obvious answer is for the discovery service (or service provider in general) to restrict access based on the identity of the invoking party. However, how is that invoking identity presented? At first thought it seems like OpenID and OAuth should suffice, but it turns out this doesn’t work too well in practice.

Let’s take the following example and walk it through.
“Alice logs into her hiking site, uploads a GPS track and photos, and notifies her friends of the new information.”
  1. Alice logs into one of her favorite web sites (hikingtrails.example.com).
  2. Alice uploads a GPS track and some photos of a new trail she hiked over the Labor Day weekend.
  3. At the conclusion of her upload, hikingtrails.example.com asks Alice if it should notify her friends about her activity.
  4. Alice thinks that’s a great idea and agrees.
  5. So hikingtrails.example.com queries portablecontacts.example.com, using pre-established OAuth credentials, and retrieves Alice’s list of contacts with a tag of “hiking buddy”.
  6. Now for each of these friends, hikingtrails.example.com has to discover the “notification” service and send it the new activity message.
  7. One of Alice’s friends, Bob, only exposes the endpoint and metadata of his “notification” service to a restricted list of people.

It’s at this point that things begin to break down. How does hikingtrails.example.com identify Alice to Bob’s discovery service so that hikingtrails.example.com can attempt to discover Bob’s “notification” service? There currently isn’t a binding for OAuth to be used with XRDS discovery, and even if there were, it would mean that Alice would have to have an “account” at Bob’s discovery service in order for the discovery service to be able to authenticate Alice and establish OAuth credentials. While this would only have to be done once with Bob’s discovery service, the user experience would have to be repeated with each of Alice’s friends’ discovery services. That seems like overkill for the simple purpose of identifying Alice to Bob's discovery service.

A possible solution would be an “open identity token” that could be created by an identity provider and passed to any service provider. I have some thoughts on this that I hope to expound on in another post.

Wednesday, August 06, 2008

Identity-based discovery for the masses

The ability to link people relationships to services in a very dynamic and distributed way is beginning to emerge. Envision a world where current social network relationships enable the discovery of personal services about any particular identity.

Consider the following use case: I want to notify (via the preferred mechanism of the recipient) all the people tagged as “close friends” in my “address book” or “social network”.

The problem is that while address books and social networks can know the relationship of “close friend”, they usually don’t have any way of knowing the “preferred mechanism of the recipient” to receive notifications. What’s needed is a way to “discover” via the identifiers in the “social relationship” the personal services of any particular individual.

So, what’s changing? A “new” community specification EAUT (Email-Address-to-URL-Transform) is being completed that allows an email address to be transformed (or mapped) into an OpenID. While this, in and of itself, is valuable for the adoption of the OpenID protocol, what I find really interesting is that since OpenID relies on XRDS for discovering the individual’s OpenID Provider, the mechanism is in place to discover any other service. This really enables an email-to-personal-service-discovery path that can launch a whole new set of use cases.

Even if the user doesn’t know anything about OpenID... the user’s social network could easily retrieve the OpenIDs for all of the user’s social relationships (since most are based on email address or have access to the email address). The social network could then provide additional services based on this discoverable information.

With most of the major identity providers already providing OpenIDs, all that's needed is for these players to support the EAUT specification.


[Disclaimer for my Liberty Alliance colleagues: These capabilities are already supported by combining the People Service and the Discovery Service for SOAP based deployments.]


Tuesday, June 03, 2008

Friend Classifications

A colleague of mine was mentioning recently that it's "hard to keep contacts separate (work/friends/family)" on all the different social networking sites. I couldn't agree more. This is made additionally hard by the fact that each "social network" site imposes its own "classification" of friends. These classifications are not mine, and I have to morph my view of my contacts into these imposed categories.

I would much prefer to be able to manage my contact classifications as "tags". Taxonomies force me into specifying that a contact can be in only one group. I have work colleagues that are also friends. If I could "tag" all my contacts with my own classification scheme and then use that when interacting with social networking sites, life would be much simpler.

I guess this is the "promise" of the "Open Social Web".

Thursday, May 15, 2008

XRDS for Information Cards?

One of the topics that came up in a couple of sessions at the most recent IIW (IIW2008a) is the concept of allowing an RP/SP to describe its services and requirements in a passive way. The goal is to allow "Identity Agents" to provide a progressively better user experience. A user should be able to interact with the RP in a normal "dumb" browser session, but tools (e.g. an Identity Agent) should also be able to provide a much more seamless experience.

In an InfoCard Capabilities session, led by Pamela Dingle, there was some discussion about how to do this technically. The following is a proposal for one way this might be accomplished.

The "Relying Party" would need to...
  • define an XRDS file describing identity protocols supported, services provided, and other metadata
  • allow for discovery of the XRDS file through normal XRI resolution (section 6)
  • additionally allow for discovery by adding a
    <link rel="xrds.metadata" target="location-of-xrds-file"/>
    in the <head> section of the pages provided by the relying party

The "Identity Agent" would need to...
  • look in the web page DOM for the xrds.metadata link
  • retrieve and parse XRDS document
  • find supported identity mechanisms and endpoints
  • invoke card selector
  • allow user to select card
  • post card to endpoint specified in the XRDS document


What could the XRDS markup look like for an InfoCard relying party?

First we need to define some <Type> URIs to represent different metadata characteristics of the relying party relating to InfoCards. Here are some possible examples...

<Type>http://infocardfoundation.org/policy/1.0/login</Type>
This type identifies the service/endpoint describing the claims required for login.
<Type>http://infocardfoundation.org/policy/1.0/registration</Type>
This type identifies the service/endpoint describing the claims required for registration.
<Type>http://infocardfoundation.org/service/1.0/login</Type>
This type identifies the service/endpoint where the login token should be sent.
<Type>http://infocardfoundation.org/service/1.0/registration</Type>
This type identifies the service/endpoint where the registration token should be sent.


An example XRDS document could look like...

<?xml version="1.0" encoding="UTF-8"?>
<XRDS xmlns="xri://$xrds">
<XRD xmlns="xri://$xrd*($v*2.0)" version="2.0">
<Type>xri://$xrds*simple</Type>
<!-- Service specification that identifies the endpoint of the infocard policy for login claims -->
<Service>
<Type>http://infocardfoundation.org/policy/1.0/login</Type>
<URI>http://sp.example.com/policy/login.xml</URI>
</Service>
<!-- Service specification that identifies the endpoint of the infocard policy for registration claims -->
<Service>
<Type>http://infocardfoundation.org/policy/1.0/registration</Type>
<URI>http://sp.example.com/policy/registration.xml</URI>
</Service>
<!-- Service specification that identifies the endpoint for submitting login claims -->
<Service>
<Type>http://infocardfoundation.org/service/1.0/login</Type>
<URI>http://sp.example.com/login</URI>
</Service>
<!-- Service specification that identifies the endpoint for submitting registration claims -->
<Service>
<Type>http://infocardfoundation.org/service/1.0/registration</Type>
<URI>http://sp.example.com/registration</URI>
</Service>
</XRD>
</XRDS>
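
Finally, here is a rough sketch of how an Identity Agent might consume the markup above. The 'xrds.metadata' link relation and the infocardfoundation.org Type URIs are just the proposal from this post, not published standards.

# Sketch only: 'xrds.metadata' and the infocardfoundation.org Type URIs are
# the proposal above, not published standards.
import urllib.request
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

XRD_NS = 'xri://$xrd*($v*2.0)'
LOGIN_SERVICE = 'http://infocardfoundation.org/service/1.0/login'

class XRDSLinkFinder(HTMLParser):
    xrds_location = None
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == 'link' and attrs.get('rel') == 'xrds.metadata':
            self.xrds_location = attrs.get('target')

def find_login_endpoint(page_url):
    # Step 1: look in the web page for the xrds.metadata link
    finder = XRDSLinkFinder()
    page = urllib.request.urlopen(page_url).read().decode('utf-8', 'replace')
    finder.feed(page)
    if not finder.xrds_location:
        return None
    # Step 2: retrieve and parse the XRDS document
    doc = ET.fromstring(urllib.request.urlopen(finder.xrds_location).read())
    # Step 3: find the endpoint where the login token should be posted
    for service in doc.iter('{%s}Service' % XRD_NS):
        types = [t.text for t in service.findall('{%s}Type' % XRD_NS)]
        if LOGIN_SERVICE in types:
            return service.findtext('{%s}URI' % XRD_NS)
    return None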

Thursday, May 08, 2008

How times change

I must be getting "old" as I find the following humorous.

The other morning my daughter said she was doing a report on "family life" in the 1980s. My wife (who likes to research things) grabbed her laptop and started looking up information on the 1980s. She finds a site and reads something like...

In the early 80's, the computer game "Pac Man" was released in color making all previous black and white games obsolete.


To which my daughter responds... "What's Pac Man?"

Friday, May 02, 2008

Community Convergence

There has been much talk in our industry about Technical Convergence (i.e. moving toward one protocol for a particular task). However, before there can be Technical Convergence, there needs to be Community Understanding and potentially Convergence.

That's why I'm excited to be attending the Internet Identity Workshop in Mt. View, CA the week of May 12. This is one of those places where Community Understanding takes place and sometimes even Community Convergence.

Hope to see you there!

Wednesday, April 09, 2008

Discovering OpenID Relying Parties

Yesterday Paul blogged about his experience logging into Wishlistr with his Yahoo OpenID.

But, when I tried to do so, Yahoo! showed me the following warning



What would Wishlistr need to do to 'confirm its identity' to Yahoo such that users wouldn't see this (likely enthusiasm killing) warning?


I commented on Paul's blog that it might have something to do with OpenID Relying Party discovery. Section 9.2.1 of the OpenID 2 spec defines how to verify the return_to URL in an OpenID authentication.

OpenID providers SHOULD verify that the return_to URL specified in the request is an OpenID relying party endpoint. To verify a return_to URL, obtain the relying party endpoints for the realm by performing discovery on the relying party.


I tried requesting the XRDS description from Wishlistr to no avail (curl --header "Accept: application/xrds+xml" -i -v http://www.wishlistr.com ). Section 13 of the OpenID 2 spec makes it a SHOULD for relying parties to support discovery. With the adoption of OpenID 2 just beginning to ramp up, relying parties supporting discovery may be a ways away.
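
For reference, here is a rough sketch of the kind of check an OP could perform, assuming the RP publishes its return_to endpoints in an XRDS under the http://specs.openid.net/auth/2.0/return_to service type. The real verification rules (realm wildcards, XRDS location headers, etc.) are more involved than this.

# Sketch only: real return_to verification (realm wildcards, X-XRDS-Location
# redirects, etc.) is more involved than this.
import urllib.request
import xml.etree.ElementTree as ET

XRD_NS = 'xri://$xrd*($v*2.0)'
RETURN_TO_TYPE = 'http://specs.openid.net/auth/2.0/return_to'

def rp_discovery_ok(realm, return_to):
    req = urllib.request.Request(realm,
                                 headers={'Accept': 'application/xrds+xml'})
    try:
        doc = ET.fromstring(urllib.request.urlopen(req).read())
    except Exception:
        return False  # no XRDS at all, so the OP falls back to its warning
    for service in doc.iter('{%s}Service' % XRD_NS):
        types = [t.text for t in service.findall('{%s}Type' % XRD_NS)]
        if RETURN_TO_TYPE in types:
            for uri in service.findall('{%s}URI' % XRD_NS):
                if uri.text and return_to.startswith(uri.text):
                    return True
    return False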

Please note that this is just my guess as to what might be causing the warning. There are many other possible causes as well. Though I do believe that RP discovery is a key feature of OpenID 2.

Thursday, April 03, 2008

Identity API for the Internet Identity Layer

Patrick Harding has a great post today on "A Model for an Internet Identity Layer". He breaks down the Identity Layer into three sub-layers. He also points out that...

In addition, we have found that applications developers are spending far too much time concerning themselves with the lower levels of the identity layer. App developers need to be able to leverage a standard identity API interface that interacts with the claims sub-layer. The developer should receive all the information it needs via this API directly from the claims sub-layer. This information obviously manifests as claims and as such means that application, by default, must become claims aware. Today, this likely just means user attributes or a role value, but in the future this may include actual authorization decisions. Leveraging a standard API that allows an application to plug-and–play with the identity layer offers some future proofing as the identity protocols underneath change.


Interestingly enough, Phil Hunt talks specifically about this issue in his post yesterday.

You see, the idea isn't just to support identity privacy and governance, but to create an application identity API (aka Attribute Services API) that allows applications to become decoupled these issues of having to support all the protocols and technologies out there. It lets the enterprise's decide how and when applications should access identity information and by what means.


The API that Phil references is the IGF AttrSvcs API that is part of the openLiberty project. I believe this API addresses a key component of the "Identity Interface" and "claims sub-layer". It's going to be very important to track policy at the "Identity Interface", and the Identity Governance Framework addresses these issues.

Wednesday, April 02, 2008

Trust relationships in OpenID

A while back I wrote the following to the OpenID general listserv.

As I see it there are 3 parties involved in the transaction: the user, the OP and the RP. There is some trust/risk factor associated with each relationship.


From the user's perspective they "trust" the OP (either because they want to spam and so are using an OP that makes "false assertions", or because they trust the OP to protect their authentication credentials and represent them correctly on the web). The user may or may not trust the RP, but by logging in they are making some level of trust/risk assessment.


From the OP's perspective the user represents some risk/value metric (too many "bad" users and the OP gets blacklisted). The OP protects against that risk by potentially verifying email or cell number, supporting PAPE and other strong authentication methods, etc. The OP also has a risk/value metric with the RP though this is probably not super important right now. I can envision a smart OP warning me about authenticating to an RP that it somehow determined is not "trustworthy".


From the RP's perspective, they have a risk/value metric on the user (e.g. Is the user going to be a good citizen of my community? Are they going to abuse the resources I provide? How much effort do I want to put into detecting "bad apples"?). The RP also has a risk/value metric on the OP (e.g. When the OP says they support the PAPE extension do they really do it?). Finally the RP has a risk/value metric on the resource/service they provide. From a business perspective I don't believe it's wise to blatantly "trust" the user if the resource/service is highly valuable (e.g. moving funds between accounts). Most users today don't have the sophistication to make good decisions.



Ok, so maybe I was a little unfair in my characterization of “most users”. I was trying to say that I don't believe many users know how to choose a good OP. In fact many will just use an OP they already have (which puts pressure on those OPs to be good citizens; that's a “good thing”). So, if an RP has a high trust metric with the user's OP, then they can more confidently trust the user as well. On the RP side it's really an assessment of risk against the “User:OP” pair.




Thursday, March 27, 2008

"Open" Foundation Overload

With the creation of the OpenSocial Foundation, the DataPortability Group (they'll need a Foundation soon), the Information Card Foundation and of course the now established OpenID Foundation... I'm getting "foundationed" out.

I wonder if the community couldn't do something closer to OASIS and have one overarching Foundation with sub-groups working in each of these important areas. The IPR could be consistent for all the groups, but each group would have its own "board of directors" and control the results of specifications and other efforts.

All the different focuses are important, and new areas will arise as the identity layer grows across the internet. However, convincing "management" to join multiple different organizations is a deterrent to participation.

Monday, March 24, 2008

Is AOL exploiting OpenID?

Today, Michael Arrington (of TechCrunch) posted an article positing that AOL (along with Microsoft, Google and Yahoo) is attempting to exploit OpenID by being an OpenID Provider (OP) and not becoming an OpenID Relying Party (RP). I address a number of the issues below.
"By becoming Issuing parties, AOL and Yahoo hope to see their users logging in all over the Internet with those credentials. But they don’t accept IDs from anywhere else, so anyone that uses their services has to create new credentials with them. It’s all gain, no pain."

In addition to not being true (about AOL), the above statement doesn't make sense. When it comes to identity management, there is little value in having to store a user's identity credentials and then verify against them. A company's decisions around when to require a local account and when to accept 3rd party identities revolve around the risk of the resources being offered. If the 3rd party identity provider (in this case an OP) is trustworthy, then it's much preferable to "outsource" the identity verification to that provider rather than deal with the security and privacy issues of storing credentials. Plus, with OPs that support one-time passwords, hardware tokens, etc, an RP can gain the benefit of strong authentication without having to implement the infrastructure itself. So, it's not "all gain, no pain". In fact, requiring people to create accounts is PAINFUL (both for the company and for the user).
"Issuing parties make their user accounts OpenID compatible. Relying parties are websites that allow users to sign into their sites with credentials from Issuing parties. Of course, sites can also be both. In fact, if they aren’t both [OP and RP] it can be confusing and isn’t a good user experience."

Actually, I would disagree with this statement. The point of OpenID is to provide a user with a few identities (maybe one) that they can use at many web sites across the internet. This means that many sites will just be RPs and won't need to support the OP parts of the protocol. I do agree that the next wave of adoption will be more sites (large and small) becoming RPs.

For AOL, being an RP is important because it allows more people to use our services without requiring them to create yet another account with another password to remember. The more people that visit and interact with AOL services, the more successful AOL will be. Both ficlets and Circa Vie are OpenID relying parties, and a substantial number of their users sign in with 3rd party OpenIDs.
"It’s time for these companies to do what’s right for the users and fully adopt OpenID as relying parties. That doesn’t fit in with their strategy of owning the identity of as many Internet users as possible, but it certainly fits in with the Internet’s very serious need for an open, distributed and secure single log in system (OpenID is all three)."

I have two things to say in regard to this quote. First, it is not AOL's strategy to "own the identity of as many Internet users as possible". I've already stated why above. Second, there is another element that is key to the "Internet's very serious need", and that is "trust". Some call it reputation. It's great that OpenID 2.0 is open, distributed and secure (from a data-on-the-wire perspective). However, relying parties need to assess the business risk in regards to the resources (e.g. free storage, free domain names, free email) they are providing. With OpenID 2.0, it's possible to implement an OpenID Provider that claims to use strong authentication to verify the user but in reality does not even require a password. This means anyone can sign up at any RP without needing an account at the OP. The RP needs to determine whether the business risk of this kind of abuse is acceptable.

I believe it is this latter case that is causing the larger companies to move more slowly when it comes to opening all their services to 3rd party OpenIDs. Note that not even at LiveJournal can you create an account with a 3rd party OpenID. What you can do at LiveJournal is leave comments and be added to friends' lists.

[Disclaimer: For those that don't already know, I work for AOL.]


Wednesday, March 05, 2008

Authentication for clients

Today AOL relaunched the OpenAIM initiative. One of the key parts of this effort is supporting authentication for clients (whether those clients are native code or FLASH/AIR/Silverlight/etc). To enable this we added a 'clientLogin' API to our OpenAuth suite of APIs.

This support is to enable a user experience that is comfortable to AOL's existing members. If a user installs a client application, it is expected that any authentication that is needed is done through the client. To secure this method we use the password and a session key (transferred via SSL) to sign all requests from the client application. This ensures that all valid requests are coming from a client where the user entered their password. In addition, all API calls are monitored for abuse.

For the signing mechanism we used the OAuth signature base string method. However, given that we already had parameter names (in existing APIs) that map to the OAuth parameter names, we used the existing parameter names in favor of the OAuth names. Otherwise, the logic is the same.
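
For those not familiar with the OAuth signature base string approach, here is a rough sketch of the general idea. The parameter names and the choice of the session key as the signing secret are placeholders, not the actual clientLogin details.

# Sketch of the general OAuth-style signing approach; parameter names and the
# signing secret below are placeholders, not the actual clientLogin details.
import base64
import hashlib
import hmac
import urllib.parse

def signature_base_string(method, url, params):
    # Percent-encode and sort the parameters, then join with method and URL
    encoded = '&'.join('%s=%s' % (urllib.parse.quote(str(k), safe=''),
                                  urllib.parse.quote(str(v), safe=''))
                       for k, v in sorted(params.items()))
    return '&'.join([method.upper(),
                     urllib.parse.quote(url, safe=''),
                     urllib.parse.quote(encoded, safe='')])

def sign_request(method, url, params, signing_secret):
    base = signature_base_string(method, url, params)
    digest = hmac.new(signing_secret.encode(), base.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# params['sig'] = sign_request('GET', api_url, params, session_key)  # illustrative names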

I realize that from a pure privacy and security perspective, the OAuth flow of "popping" a browser and having the user only enter their credentials at the "owning" IdP is better. However, for many of our customers this is an unexpected user experience. clientLogin enables the desired user experience.

How times change

A couple nights ago my teenage son was on the family computer “working” on facebook.

So I asked him “How are your friends doing?” (trying to be a good parent and get my kids to talk:) ).

My son answers “I'm trying to help Judson and Samuel be friends” (names changed to protect the innocent).

I ask, “Why, are they not getting along? Did something happen?” (rather surprised).

“No”, he answers, “they can't find each other on facebook.”


I should have known.

Wednesday, February 13, 2008

Valentine's one day early

I'm not sure what it is about Valentine's Day and Virginia but this is the second year in a row.



Wednesday, February 06, 2008

More on Email to URL discovery

Just a couple more thoughts on this topic.

  1. I believe that the mapping from email domain to XRDS URL should include some additional pattern to make deployments easier. Deploying a servlet or other mechanism on the root email domain is not always easy. Having a different pattern such as xrds.my-email-service.domain would simplify deployment issues.
  2. While it is nice that an email to URL mapping service can provide user-centric identifier management, I don't know how much it will be used by the general public. Discovering that the email provider is also an OpenID 2.0 provider would be enough to start a directed identity flow. This I believe would be more convenient for most users.

Tuesday, February 05, 2008

Email to URL discovery

In the ongoing "world" of IdP discovery a new proposal has been made by Brad Fitzpatrick on his blog. This proposal provides for a direct mapping between an email address and a personal URL (e.g. an OpenID). The mapping between the email address and personal URL is provided by an email2url_mapping service run by the email provider. Discovery of the email2url_mapping service endpoint is found via XRI URL resolution on the domain of the email address.

Whether the user needs the full indirection capability to have an arbitrary mapping between email address and URL, or is fine with allowing the email provider to be their OP/IdP, is yet to be seen. It is encouraging to see consumers' UX needs being addressed.

Saturday, January 19, 2008

Looking at the evidence...

... it seems that we still have a ways to go when it comes to user education, user-centric identity and IdP discovery. I applaud Yahoo! and Blogger for supporting OpenID by being OpenID Providers. That is a huge step forward. However, it's interesting to note how these mainstream relying party (RP) sites are implementing the user experience.





From the OpenID listserv it appears that Yahoo! would prefer RPs to put a Yahoo! logo on their site that is clickable to enable Yahoo! users (and others) to log in to that site (using the "directed identity" flow).

Also, looking at the Blogger implementation of accepting OpenIDs, they list 4 main OpenID providers (I'm guessing Yahoo! will be added to the list) and then a button for "Any OpenID".

Maybe lesser known: propeller.com (an AOL property), which accepts OpenIDs, uses the OpenID protocol to authenticate AOL/AIM users but presents the UI as "Sign in using my AOL Screen Name".

What I find fascinating about this trend is that it bypasses one of the benefits of an OpenID (built-in IdP discovery). Basically, these mainstream RP sites are using the "user picks their IdP" solution for determining where to send the user, rather than having the user type in their IdP (yahoo.com, openid.aol.com, etc) or full OpenID URL. At the moment this scales OK, as there aren't that many mainstream providers, but either user education needs to get better so this mechanism isn't needed, or we need a different technical solution.

WebGuild Web 2.0 Conference

I'm participating in a panel at WebGuild's Web 2.0 Conference and Expo being moderated by Johannes. The panel will be discussing OpenID and OAuth among other things. It should be a good discussion given the recent announcements by Yahoo! and Google.