directory-api mailing list archives

From Emmanuel Lécharny <>
Subject Re: Client API Schema support
Date Fri, 23 Jan 2015 14:53:29 GMT

>>> What I would need at this point is an exact description of what you want
>>> to do in order to drive you toward the various classes of the API.
>> Maybe if you can describe how to support OpenLDAP server there .... I
>> think I can figure out the rest.
> I'll do that after dinner.

Long-lasting dinner, followed by side tasks...

So basically, the starting point is the
LdapNetworkConnection.loadSchema() method. This is where the connection
looks for the remote schema.

Here, you have two options:
- either the LDAP server stores the information in the rootDSE
- or the server supports multiple subschemaSubentry entries, and you have
to get it from a specific entry.

Let's assume we are dealing with case #1 (I don't know of any server
supporting multiple subschemaSubentry entries, but dealing with that is
not really that complex: we just need a way to tell the schema loader
where to load the schema from, instead of assuming it will pump it from
the rootDSE).
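
To make the two cases concrete, here is a tiny plain-Java sketch (the class and method names are mine, not part of the API, and it assumes the rootDSE attributes have already been fetched into a Map):

```java
import java.util.Map;
import java.util.Optional;

public class SubschemaLocator
{
    /**
     * Hypothetical helper: given the rootDSE attributes (already fetched),
     * return the DN of the subschema subentry to load the schema from.
     * Falls back to an explicitly supplied DN when the server does not
     * publish one (the "multiple subschemaSubentry" case).
     */
    public static String locate( Map<String, String> rootDseAttributes,
        Optional<String> explicitDn )
    {
        String published = rootDseAttributes.get( "subschemaSubentry" );

        if ( published != null )
        {
            // Case #1: the server advertises the subschema location in its rootDSE
            return published;
        }

        // Case #2: the caller must tell the loader where the schema lives
        return explicitDn.orElseThrow( () -> new IllegalStateException(
            "No subschemaSubentry in rootDSE; an explicit DN is required" ) );
    }
}
```

The point is only the decision shape: trust the rootDSE when it advertises the location, otherwise require the caller to supply it.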

Anyway, the schema loading is done in the DefaultSchemaLoader class :

    public DefaultSchemaLoader( LdapConnection connection ) throws LdapException
    {
        // Getting the subschemaSubentry DN from the rootDSE
        Entry rootDse = connection.lookup( Dn.ROOT_DSE,
            SchemaConstants.VENDOR_NAME_AT );
        ...

The next step, absolutely critical, is to determine which kind of
server we are dealing with. This is done with:

            if ( rootDse != null )
            {
                // Checking if this is an ApacheDS server
                if ( isApacheDs( rootDse ) )
                {
                    ...

This is where you are, atm. The thing is that we would like to have this
part dynamic, instead of having it hard coded. It would actually be way
better to have a module that deals with each different server, based on
what we get in the rootDSE vendorName attribute. Alas, it's not easy:
- first, not all the servers implement this attribute
- second, we still have to find a way to map the value contained in
this attribute to the right module (i.e., ApacheDSSchemaLoader,
OpenLDAPSchemaLoader, etc.)

We can discuss this aspect, but IMO, it's not really a critical one, as
there are not thousands of vendors, nor thousands of new servers being
released.
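
As a sketch of what such a dynamic mapping could look like (plain Java, all names hypothetical, not API classes; it deliberately falls back to a generic RFC 4512 loader, since not all servers publish vendorName):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

public class SchemaLoaderRegistry
{
    // Hypothetical stand-in for the per-server loader modules
    // (ApacheDSSchemaLoader, OpenLDAPSchemaLoader, ...): here just a name.
    public interface ServerSchemaLoader
    {
        String name();
    }

    // Map a substring of the rootDSE vendorName value to the module that
    // knows how to load that server's schema. Insertion order = priority.
    private static final Map<String, Supplier<ServerSchemaLoader>> MODULES =
        new LinkedHashMap<>();

    static
    {
        MODULES.put( "apache", () -> () -> "ApacheDSSchemaLoader" );
        MODULES.put( "openldap", () -> () -> "OpenLDAPSchemaLoader" );
    }

    /**
     * Pick a loader from the vendorName value, falling back to a generic
     * RFC 4512 loader when the attribute is absent or unknown.
     */
    public static ServerSchemaLoader select( String vendorName )
    {
        if ( vendorName != null )
        {
            String lowered = vendorName.toLowerCase();

            for ( Map.Entry<String, Supplier<ServerSchemaLoader>> module : MODULES.entrySet() )
            {
                if ( lowered.contains( module.getKey() ) )
                {
                    return module.getValue().get();
                }
            }
        }

        return () -> "GenericRfc4512SchemaLoader";
    }
}
```

A substring match on vendorName is crude, but it shows why the mapping is awkward: the fallback branch is what actually does the work for the servers that don't implement the attribute.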

That being said, we are now at the "// TODO Handle schema loading on
other LDAP servers" part. The next step is to implement an equivalent of
the loadSchemas() method for other servers.

Let's see what it may look like for OpenLDAP. First, it may be named
loadOpenLdapSchema(). What it will do is grab all the various values
of the subschemaSubentry entry (OC, AT, etc.). We have more schema
elements in ApacheDS than are supported by other LDAP servers:
typically, comparators, normalizers and syntaxCheckers are not part of
the specification (RFC 4512). What we can expect to get from this entry is:
- objectClasses
- attributeTypes
- matchingRules
- matchingRuleUses
- ldapSyntaxes
- ditContentRules
- ditStructureRules
- nameForms

Each one of those elements, which are specified in RFC 4512 (section
4.2), potentially contains hundreds of values whose format is
*supposed* to respect that specification.
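
To make the value format concrete, here is a minimal illustration (plain Java, not API code, and only a regex, nowhere near the full RFC 4512 grammar) that pulls the numeric OID and the first NAME out of such a description:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class AttributeTypeDescriptions
{
    // Very small illustration of the value format: extract the numeric OID
    // and the first NAME from an RFC 4512 AttributeTypeDescription such as
    //   ( 2.5.4.3 NAME 'cn' SUP name )
    // The real dedicated parsers handle the full grammar; this regex does not.
    private static final Pattern OID_AND_NAME =
        Pattern.compile( "\\(\\s*([0-9][0-9.]*)\\s.*?NAME\\s+'([^']+)'" );

    public static String[] oidAndName( String description )
    {
        Matcher matcher = OID_AND_NAME.matcher( description );

        if ( !matcher.find() )
        {
            throw new IllegalArgumentException(
                "Not an AttributeTypeDescription: " + description );
        }

        return new String[] { matcher.group( 1 ), matcher.group( 2 ) };
    }
}
```
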

Let's see what is being done for AttributeTypes. Here, we call:

        // Load all the AT
        Attribute attributeTypes = subschemaSubentry.get( SchemaConstants.ATTRIBUTE_TYPES_AT );
        loadAttributeTypes( attributeTypes );

which iterates across all the values (which are AttributeType
descriptions):

        for ( Value<?> value : attributeTypes )
        {
            String desc = value.getString();

            AttributeType attributeType =
                AT_DESCR_SCHEMA_PARSER.parseAttributeTypeDescription( desc );

            updateSchemas( attributeType );
        }

Two steps :
- parse what we get
- store the result in the schema.

Note that at this point, we *don't* check whether the schema is valid.
Typically, we don't check that an AT we load has a valid SUP, because
this SUP may very well not have been read yet. We are just loading the
schema.
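
That two-phase idea ("load everything first, resolve references later") can be sketched in isolation; the types below are placeholders I made up, not API classes:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TwoPhaseSchemaLoad
{
    // Placeholder for a parsed attribute type: just a name and an optional SUP.
    public record ParsedAttributeType( String name, String sup ) {}

    private final Map<String, ParsedAttributeType> registry = new HashMap<>();

    // Phase 1: register everything, even if its SUP has not been seen yet.
    public void register( ParsedAttributeType attributeType )
    {
        registry.put( attributeType.name(), attributeType );
    }

    // Phase 2: once all elements are loaded, report dangling SUP references.
    public List<String> danglingSups()
    {
        List<String> dangling = new ArrayList<>();

        for ( ParsedAttributeType attributeType : registry.values() )
        {
            if ( attributeType.sup() != null
                && !registry.containsKey( attributeType.sup() ) )
            {
                dangling.add( attributeType.name() );
            }
        }

        return dangling;
    }
}
```

Registration never fails on a forward reference; only the later resolve pass cares whether every SUP actually exists.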

The parsing is done with dedicated parsers:
AttributeTypeDescriptionSchemaParser for ATs (the AT_DESCR_SCHEMA_PARSER
constant is a singleton instance of such a parser):

    private static AttributeTypeDescriptionSchemaParser AT_DESCR_SCHEMA_PARSER =
        new AttributeTypeDescriptionSchemaParser();

The logic is simple :

    public synchronized AttributeType parseAttributeTypeDescription(
        String attributeTypeDescription ) throws ParseException
    {
        // Reset and initialize the parser / lexer pair
        reset( attributeTypeDescription );

        AttributeType attributeType = parser.attributeTypeDescription();

        // Update the schemaName
        updateSchemaName( attributeType );

        return attributeType;
    }

We use an antlr parser for every element. A dedicated parser will
either use its own antlr grammar or be hand written, as long as it
produces valid AttributeType/ObjectClass/... instances.

This is it. Everything else is just internal magic, that is irrelevant
for you at this point.

Is that enough explanation, or are there some points you'd like me to
explain further? If you like, we can work hand in hand on the first
schema loader, I'll be pleased to help!
