A Rules Validator component is a piece of software that checks an object's or entity's property-value pairs against a set of rules or constraints. It is a very common concept that encapsulates conditional logic of the form «If <boolean condition> then…» in an object. Using the Rules Validator you would write instead: If rulesValidator.Validate(ruleName, objectToBeValidated) then…. As expressed, the syntax is equivalent, but the flexibility is enormous, because you can validate any kind of object data at runtime, based on dynamic rules.
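The contrast above can be sketched roughly as follows. This is a hypothetical illustration: Customer, IRulesValidator, and the rule name "SeniorOffer" are made-up examples, not part of the project.

```csharp
// Hypothetical sketch: all names here are illustrative.
public interface IRulesValidator
{
    bool Validate(string ruleName, object objectToBeValidated);
}

public class Customer { public int Age; public bool Senior; }

public class OfferService
{
    // Hard-coded conditional logic: the rule lives in the code.
    public bool HardCoded(Customer c) => c.Age > 65;

    // The same decision delegated to a rules validator: the rule lives
    // in external metadata and can change without recompiling the caller.
    public bool WithValidator(IRulesValidator validator, Customer c) =>
        validator.Validate("SeniorOffer", c);
}
```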
The elements of the whole solution are as follows:
- Rules metadata. A set of rules in a particular syntax. A rule has constraints expressed as boolean statements or expressions.
- RulesValidator. A Command object exposing a Validate or Check operation. It receives the objects whose properties need to be validated and the name of the rule against which they will be reviewed.
- ObjectToBeValidated. A concrete object or entity, or a set of them, to be evaluated. The values assigned to this object's properties are evaluated by the Validator against the set of rules.
In the following lines I will describe and explain the implementation details of the Rules Validator project. This GitHub project is a C# .NET Framework solution implementing a simple validator. In this post, I will cover the rules metadata expression and the main validation interfaces. In further posts, I will describe the implementation details of this solution.
Let me show you the details.
How to express the rules?
The rules are statements of the kind property/operator/value, or the more general subject/predicate/object. These are simple boolean expressions, for example, Age > 21 (property|subject = [Age]; operator|predicate = [>]; value|object = [21]), or Senior = True (property|subject = [Senior]; operator|predicate = [=]; value|object = [True]). They could be implemented as relational database records (columns for properties, operators, and values), as an XML document (elements and attributes), or in any other language capable of expressing this syntax (RDF, e.g.).
In our case, the project uses an XML document file with the following format:
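The project's actual sample file is not reproduced here; the following is a hypothetical illustration, consistent with the elements and attributes described next (the rule and constraint names, the value, and the types are made-up examples):

```xml
<rulesMap>
  <rule name="SeniorOffer">
    <constraints>
      <constraint name="SeniorAgeOnly"
                  property="Age"
                  operator="greaterThan"
                  value="65"
                  valueType="System.Int32"
                  typeToValidate="Customer" />
    </constraints>
  </rule>
</rulesMap>
```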
Let me briefly explain the XML elements and attributes:
- <rulesMap> – The root element that encloses the whole rule set.
- <rule> – A rule element. It holds the set of constraints.
- name – the rule's name. It should be a human-understandable name: MothersDayDiscount or SeniorOffer.
- <constraints> – The collection of constraints. It holds the actual set of rule expressions.
- <constraint> – The rule expression proper.
- name – the name of the constraint. As with any name, it is preferable to declare the rule's intention: MoreThanFiveDollars, MaleOnly, FivePercentDiscount.
- property – The object or entity property or field whose value will be validated.
- operator – The exact operator to be used for the validation.
- value – The value or values the property must have for the validation to succeed.
- valueType – The data type of the property to be evaluated. It is language- or platform-specific in most cases. You can use whatever you want, but the caller must understand and manage it.
- typeToValidate – The type of the object to be validated. It is platform- or user-specific: you can use any programming-environment type or a user-defined type. The caller must know the types involved in order to handle them during the actual validation process.
How should this work?
A RulesValidator must interpret the valueType and typeToValidate values. It must also translate the values of the operator attribute (equal, greaterThan, etc.) into the native operators of its framework or platform. This part requires understanding and managing the Reflection namespaces. As said above, the validator is a Command object. The Client, or Caller, invokes the component whenever a condition requires checking some rules. The RulesValidator could be implemented as a Strategy, allowing the Client to call a concrete validator object based on another variable condition (day of the week, customer type, weather, etc.) or on the platform. There are also opportunities to implement a Chain of Responsibility for more complex validations. In that case, a controller component would distribute the objects to be validated to a tree or chain of objects, each one reviewing rules against the object's property-value pairs and returning the results. At the end of the chain, the validation must return a boolean value.
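The two translation duties mentioned above (mapping operator tokens to native comparisons, and reading a named property through reflection) could be sketched like this. This is a minimal illustration, not the project's actual code; OperatorTranslator and its method names are assumptions.

```csharp
// Hypothetical sketch: names are illustrative, not the project's API.
using System;
using System.Reflection;

public static class OperatorTranslator
{
    // Translate an operator token from the metadata into a native comparison.
    public static bool Apply(string op, IComparable propertyValue, IComparable ruleValue)
    {
        int cmp = propertyValue.CompareTo(ruleValue);
        switch (op)
        {
            case "equal":       return cmp == 0;
            case "greaterThan": return cmp > 0;
            case "lessThan":    return cmp < 0;
            default: throw new NotSupportedException($"Unknown operator: {op}");
        }
    }

    // Reflection: locate a property by name on the target's runtime type.
    public static object GetPropertyValue(object target, string propertyName)
    {
        PropertyInfo info = target.GetType().GetProperty(propertyName);
        return info?.GetValue(target);
    }
}
```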
Program to an interface, not an implementation
One of the most valuable principles a programmer can learn during his or her career is:
«Program to an interface, not an implementation» —GoF
But what does this axiom really mean? The real programming task is to design and write algorithms using standard data structures. A programming solution should describe the algorithms it uses to solve the task at hand. You should also avoid the well-known «hard-coded» architectures. By that, I mean the common assembly of different implementation modules (concrete classes) that relegates abstractions or interfaces to special uses only. The best remedy for this illness is the extensive use of the Dependency Inversion Principle (DIP).
That is why, in the discussed project, I implement an assembly of abstract components (Klod.Validation) as the main entrance of the solution (a Façade) and the only liaison with the Client. The Client then has no need to reference any implementation package or assembly, which protects it from future variations (the Protected Variations principle). The abstract component should also have the logic and mechanisms to dynamically load the concrete implementations through Factories and Factory Methods.
Now, let me show you the main base classes.
RuleManager abstract class. This class is the main entrance of the package. It is an implementation of the Command design pattern and follows the GRASP Controller principle. It has a private RulesRepository field that stores the set of rules to be checked. Its main method is Validate(), which receives the name of the rule to be validated and the collection of objects whose property-value combinations should be checked.
Review the simple algorithm of the Validate() method:
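A sketch of that algorithm might look like the following. This is a hypothetical reconstruction based on the description in this post; the actual project code may differ in signatures and details.

```csharp
// Hypothetical sketch of RuleManager.Validate(); names follow the post.
public abstract class RuleManager
{
    private RulesRepository _rulesRepository;

    public virtual bool Validate(string ruleName, params object[] objectsToValidate)
    {
        // Get the rule from the repository by its unique name...
        RuleBase rule = _rulesRepository.GetRule(ruleName);

        // ...and let the rule (the Information Expert) check every object.
        foreach (object target in objectsToValidate)
        {
            if (!rule.Check(target))
                return false;
        }
        return true;
    }
}
```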
The RuleManager gets the rule from a Rules repository by its unique name and lets the RuleBase object check the objects against the specific rule. Notice that the RuleManager delegates the concrete evaluation to the RuleBase object, because the latter is the Information Expert. It is also implemented as a Command.
RuleBase abstract class. The core of the solution is the RuleBase abstract class. As you might suppose, it represents a rule. But rather than representing a single constraint, the RuleBase represents a set of constraints. This representation of a rule as a collection is flexible and extensively reusable. You can group several sentences under a single name (SeniorDiscount, e.g.) with one or more validation constraints.
The inner fields of this class are a dictionary to collect the constraints and a representation of the rules metadata file:
- private Dictionary<string, ConstraintBase> _constraints;
- private RuleMap _ruleMap;
The inner representation of the constraints as a Dictionary allows us to attach a unique name to every instance of a constraint; this name is the constraint's proper identifier. The RuleMap is a base class representing a set of constraint metadata.
Along with the class constructor, the most important methods are the abstract Check() and LoadConstraints() methods. Their implementation details are deferred to the subtypes, or implementers. There is an apparent good-practice violation in calling the overridable LoadConstraints() method from the base class constructor. The issue is that calling overridable methods in base class constructors can result in calls to derived-class members before the base class is fully constructed and its values completely set. I said this violation is apparent because, beyond the Template Method in constructors (a special clarification of the pattern), the call in this case is to an abstract method with no implementation or version in the base class, so no overriding of existing behavior takes place.
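Putting the fields and methods described above together, the class could be sketched as follows. This is a hypothetical outline, not the project's actual source; the constructor parameter is an assumption.

```csharp
// Hypothetical sketch of RuleBase, based on the description in this post.
using System.Collections.Generic;

public abstract class RuleBase
{
    private Dictionary<string, ConstraintBase> _constraints;
    private RuleMap _ruleMap;

    protected RuleBase(RuleMap ruleMap)
    {
        _ruleMap = ruleMap;
        _constraints = new Dictionary<string, ConstraintBase>();
        // Call to an abstract method: there is no base implementation,
        // so no base behavior can be overridden by this call.
        LoadConstraints();
    }

    public abstract bool Check(object objectToValidate);
    protected abstract void LoadConstraints();
}
```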
RuleMap abstract class. This class represents the whole collection of constraint expressions. It is a classic POCO class. As you will notice, it only has arrays and fields for every attribute and element in the rules metadata XML document.
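As a POCO mirroring the XML attributes, it might look roughly like this (field names are illustrative guesses, not the project's actual members):

```csharp
// Hypothetical POCO sketch mirroring the rules metadata XML.
public abstract class RuleMap
{
    public string RuleName;            // <rule name="...">
    public string[] ConstraintNames;   // <constraint name="...">
    public string[] Properties;        // property="..."
    public string[] Operators;         // operator="..."
    public string[] Values;            // value="..."
    public string[] ValueTypes;        // valueType="..."
    public string[] TypesToValidate;   // typeToValidate="..."
}
```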
ConstraintBase abstract class. This class represents a single constraint. Besides the constraint properties, it has several interesting abstract methods:
- IsValid(). Receives an object and validates it against the constraint.
- ParseConstraintString(). Delegates to subtypes the parsing of the metadata from the rules source.
- SerializePropertiesToConstraintString(). A convenience method that serializes the class properties into a readable string representation of the constraint.
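The shape of the class, assembling the properties and the three abstract methods above, could be sketched as follows (a hypothetical outline; the property names are assumptions):

```csharp
// Hypothetical sketch of ConstraintBase; only the three abstract
// methods are named in the post, the rest is illustrative.
public abstract class ConstraintBase
{
    public string Name;
    public string Property;
    public string Operator;
    public string Value;

    public abstract bool IsValid(object objectToValidate);
    protected abstract void ParseConstraintString(string constraintString);
    public abstract string SerializePropertiesToConstraintString();
}
```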
RulesRepository abstract class. This class represents a collection of rules. It stores them in a single Dictionary field. Its subtypes should implement the logic to load the metadata source (LoadRuleMaps) and to find a rule by its name (GetRule).
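A minimal sketch of that contract, assuming the Dictionary is keyed by rule name (the visibility and virtual/abstract split are guesses):

```csharp
// Hypothetical sketch of RulesRepository based on the description above.
using System.Collections.Generic;

public abstract class RulesRepository
{
    protected Dictionary<string, RuleBase> _rules =
        new Dictionary<string, RuleBase>();

    // Subtypes load the metadata source (XML, database, etc.).
    protected abstract void LoadRuleMaps();

    // Find a rule by its unique name.
    public virtual RuleBase GetRule(string name) => _rules[name];
}
```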
How are the implementation classes instantiated?
Now for the dynamic part of the solution: how are all these abstractions called from clients?
Let me present it:
RulesValidationFactory static class. This factory class is the core of the DIP implementation. The abstract package or assembly should have the mechanisms to instantiate the concrete subtype classes through this pattern or any other Creator pattern. It contains a Factory Method for every concrete class it must instantiate (MakeRulesManager, MakeRuleMap, MakeRulesRepository). This class makes extensive use of the platform's Reflection namespace: it must locate an assembly and instantiate the objects to be used at runtime.
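In .NET, such a reflection-based factory method could be sketched like this. This is an illustration of the technique, not the project's actual code; the method signature and parameters are assumptions.

```csharp
// Hypothetical sketch of a reflection-based Factory Method.
using System;
using System.Reflection;

public static class RulesValidationFactory
{
    public static object MakeRulesRepository(string assemblyFile, string typeName)
    {
        // Load the implementation assembly at runtime...
        Assembly assembly = Assembly.LoadFrom(assemblyFile);

        // ...find the concrete type by its full name...
        Type concreteType = assembly.GetType(typeName);

        // ...and instantiate it without any compile-time reference.
        return Activator.CreateInstance(concreteType);
    }
}
```

This keeps the abstract assembly free of any compile-time dependency on the implementation assembly: only a file name and a type name (typically read from configuration) are needed.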
Next post: the Klod.Validation.Classes package
That’s all for now. In the next post I will describe the details of the concrete implementation. One of the many advantages of this architecture is that you can change or test new implementations without harmful effects on the callers. For example, you could change the representation of the rules to use other mechanisms such as web services, databases, or even programming objects. The implementation can also hide platform differences, since you can encapsulate them behind a standard (HTTP messages on the Web, e.g.), as is the case in mixed .NET and Java environments.