Wednesday, February 22, 2006

Search Keywords - Relationship between Pipes-and-Filters and Decorator design patterns

An article on implementing both the Pipes-and-Filters architectural pattern and the Decorator pattern to solve a similar problem in .NET 2.0.
For more details and the source code for my article, please go to this location on CodeProject.


The article implements a simple problem using both the Pipes-and-Filters architectural pattern and the Decorator pattern, and explores the relationship, if any, between them.

Pipes-and-Filters Pattern - An architectural design pattern


The Pipes-and-Filters architectural pattern provides a structure for systems that process a stream of data: components that process the data (filters) and connections that transmit data between adjacent components (pipes). This architecture provides reusability, maintainability, and decoupling for systems whose processing consists of distinct, easily identifiable, and independent yet compatible tasks.
The use of the Pipes-and-Filters pattern is limited to systems where the order in which filters are processed is strictly determined and sequential in nature. The pattern applies to problems where it is natural to decompose the computation into a collection of semi-independent tasks. In the pipeline pattern, the semi-independent tasks represent the stages of the pipeline, the structure of the pipeline is static, and the interaction between successive stages is regular and loosely synchronous.

A pipeline is a definition of the steps/tasks that are executed to perform a business function. Each step may read or write data conforming to the “pipeline state,” and may or may not access an external service. When invoking an asynchronous service as part of a step, a pipeline can wait until a response is returned (if a response is expected), or proceed to the next step in the pipeline if the response is not required in order to continue processing.
Use the pipeline pattern when:
  • You can specify the sequence of a known/determined set of steps.
  • You do not need to wait for an asynchronous response from each step.
  • You want all downstream components to be able to inspect and act on data that comes from upstream (but not vice versa).
Advantages of the pipeline pattern include:
  • It enforces sequential processing.
  • It is easy to wrap it in an atomic transaction.
Disadvantages of the pipeline pattern include:
  • The pattern may be too simplistic to cover all cases in business logic, especially for service orchestration in which you need to branch the execution of the business logic in complex ways.
  • It does not handle conditional constructs, loops, and other flow control logic as it's mostly sequential in nature.
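The sequential, fixed-order nature of the pattern can be sketched in a few lines. This is a minimal illustration (the class and filter names are mine, not from the article, and the original implementation is in .NET 2.0): each filter is a callable that transforms its input, and the pipeline applies the filters one after another in a fixed order.

```python
# Minimal sketch of a pipeline: a static, ordered set of filter stages.
class Pipeline:
    def __init__(self, *filters):
        self.filters = filters          # the fixed sequence of stages

    def run(self, data):
        for f in self.filters:          # strictly sequential processing
            data = f(data)              # each filter transforms its input
        return data

# Illustrative filters: refine (strip), transform (upper-case), enrich (tag).
strip_spaces = lambda s: s.strip()
to_upper     = lambda s: s.upper()
add_prefix   = lambda s: "MSG: " + s

pipeline = Pipeline(strip_spaces, to_upper, add_prefix)
print(pipeline.run("  hello pipes  "))  # MSG: HELLO PIPES
```

Note how the sketch also shows the pattern's limitation: there is no branching or looping, only a straight line from the first stage to the last.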



The filters are the processing units of the pipeline. A filter may enrich, refine, process, or transform its input data.
  • It may refine the data by concentrating or extracting information from the input data stream and passing only that information to the output stream.
  • It may transform the input data to a new form before passing it to the output stream.
  • It may, of course, do some combination of enrichment, refinement, and transformation.
A filter may be active (the more common case) or passive.
  • An active filter runs as a separate process or thread; it actively pulls data from the input data stream and pushes the transformed data onto the output data stream.
  • A passive filter is activated by being called either:
    • as a function: a pull of the output from the filter.
    • as a procedure: a push of input data into the filter.
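The passive, pull-driven case can be illustrated with Python generators (an illustrative sketch of the idea, not the article's .NET implementation): each filter is only activated when the downstream element pulls output from it, and it in turn pulls from its upstream neighbor.

```python
# Sketch of passive filters driven by a pull from the data sink.
def source(items):
    for item in items:        # data source feeding the first filter
        yield item

def refine(stream):           # passive filter: extracts non-empty items
    for item in stream:
        if item:
            yield item

def transform(stream):        # passive filter: transforms each item
    for item in stream:
        yield item.upper()

# The sink pulls from the last filter, which pulls upstream in turn.
result = list(transform(refine(source(["a", "", "b"]))))
print(result)  # ['A', 'B']
```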


The pipes are the connectors: a pipe links the data source to the first filter, one filter to the next, and the last filter to the data sink. Where needed, a pipe synchronizes the active elements that it connects.

Data source

A data source is an entity (e.g., a file or input device) that provides the input data to the system. It may either actively push data down the pipeline or passively supply data when requested, depending upon the situation.

Data sink

A data sink is an entity that gathers data at the end of a pipeline. It may either actively pull data from the last filter element or it may passively respond when requested by the last filter element. 
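Putting the pieces together, an active filter can be sketched with a thread and two queues standing in for the pipes (again an illustrative sketch with names of my choosing, not the article's code): the source pushes into the first pipe, the filter runs on its own thread pulling and pushing, and the sink pulls from the last pipe. The queues provide the synchronization the pipes are responsible for.

```python
# Sketch of an active filter running on its own thread between two pipes.
import queue
import threading

SENTINEL = object()  # marks the end of the data stream

def active_filter(in_pipe, out_pipe, fn):
    while True:
        item = in_pipe.get()          # blocks until data is available
        if item is SENTINEL:
            out_pipe.put(SENTINEL)    # propagate shutdown downstream
            break
        out_pipe.put(fn(item))        # transform and push to the next pipe

pipe_a, pipe_b = queue.Queue(), queue.Queue()
worker = threading.Thread(target=active_filter,
                          args=(pipe_a, pipe_b, lambda x: x * 2))
worker.start()

for n in [1, 2, 3]:                   # data source pushes into the pipe
    pipe_a.put(n)
pipe_a.put(SENTINEL)

results = []                          # data sink pulls from the last pipe
while (item := pipe_b.get()) is not SENTINEL:
    results.append(item)
worker.join()
print(results)  # [2, 4, 6]
```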