Sunday, September 25, 2016

An interesting September weekend and the Fibonacci series



Preamble:

This post is about an alternative way of formulating the Fibonacci series that is especially useful for summing its even-valued terms. The rest of this preamble is personal backstory and may be of little interest to most of you, so feel free to skip ahead to the next section.

September 2016 has not been great to me, to my beloved city Bangalore (the city I live in and love living in), or, in general, to the state I belong to. The last weekend of the month greeted me with an unusually chilly Saturday morning. Unusual, because I don't remember the cold setting in this early in the last few years, and I have said so to myself for the hundredth time since the start of September. The continued indifference of people towards lake and ecological conservation seems to have enraged mother nature enough that she unleashed her fury with an untimely cold onset, and then seemed determined to prove the weatherman plain wrong about his forecast of due rains. Then, as if to rub salt in the wound, the jury's verdict on the long-standing river-water-sharing row with our neighbouring state caused a flare-up, upsetting the already parched civilians of our state. I feel sad to be in a situation where one and all must plead to secure the basic need of drinking water. Naturally the city simmered, and I could see the worry writ large on every face, all reading one common feeling: did we just lose our rights to basic needs?

All this had been brewing for two weeks. The cab ride home from office that Friday (the previous day) was filled with discussions of all kinds, mainly around the water crisis the public at large was facing. Strange are the ways the mind works, I was feeling, when all of a sudden a momentary flash swept across my mind with a curious problem on summing even-valued Fibonacci terms. This was a problem someone had described to me over a call during lunch that day. But I wasn't sure whether to pursue it while the entire mass in the cab was gravitating towards the centre of the raging debate on the water dispute. Though my head showed some curiosity about the Fibonacci problem for a short while, my heart came pounding to push me into the centre of gravity of the issues the cab was engrossed in. So after a while the head finally gave in to the heart, and the Fibonacci problem was junked into my memory's backyard for the rest of the journey, and in fact for a good part of the ensuing night.

The general mood was further weather-beaten by the early onset of the chill, which had me coiled up lazily, cozily wrapped in the soft rug I am used to, from Friday night till the wee hours of Saturday. While the subconscious kept nagging about the discussions on the treatment meted out to us, I was desperately waiting for my daily booting sequence to arrive (as one of my colleagues nicely puts it). Of course, you won't have to second-guess what that is for a native Bangalorean, would you? It had better be coffee decanted from a decoction made straight out of freshly ground coffee beans (for me, specifically, beans from Chikmagalur).

Lo! Just then events unfolded at home that forced me out of bed to attend to an immediate household chore. (It is plain and simple: my kids hadn't yet moved the cleaned-up utensils from the backyard to the kitchen, my wife was getting late for school, this further delayed the food preparation, and so on; you can fill in the blanks.) As the primary fail-over entity in these situations I was forced to get it done while the kids and their mother left for school without brunch. So, with no coffee in me and the fireworks just witnessed, I stood in the backyard that cold morning, staring half asleep at a pile of utensils. Oh boy, it didn't help at all! My cranium ached and longed for something different to tide this over, and there it was. Instead of inspiring me to get to the kitchen and make myself the coffee, it bounced back, of all the things from its backyard, the problem of how to sum up the even-valued terms of the Fibonacci series. What a timing, and what a situation!

Fired up a bit, I tried mentally imagining the terms of the series, hoping to make my coffee-less situation a little better. No sooner had the terms started enumerating than I started delving into the realms of the series, asking myself: what could be so special about Fibonacci? Of course we all know that every term is the sum of the previous two. A plain for-loop can detect the even values and blindly sum them. So what is so special about even values that it gets asked in an interview? Why did they not ask about odd values? Quickly finishing the agreed chore, I came back to my laptop and tapped out the first ten Fibonacci terms, to gaze at them for a while and wonder why someone would be interested in the even-valued terms.

Sorry to bother you with my preamble; without much ado, let me jump to the crux.

Fibonacci series generation


The Fibonacci series, as we all know, is constructed with every term being the sum of its previous two terms (obvious, but stated again anyway). So typically we could express it as follows:

Given :

  1. N as the whole number space
  2. Z as the integer space
  3. Z+ and Z- as the positive and negative integer space.
  4. i as the index of the term in the series
  5. F(i) as the ith fibonacci term


For all i in N:
F(i) = i where i in [0,1]
F(i) = F(i-2) + F(i-1) where i  in [2, INF)

If we tabulate the terms up to i = 10, the table looks as follows:

i F(i) Std form
0 0 Assumed base value
1 1 Assumed base value
2 1 F(0)+F(1)
3 2 F(1)+F(2)
4 3 F(2)+F(3)
5 5 F(3)+F(4)
6 8 F(4)+F(5)
7 13 F(5)+F(6)
8 21 F(6)+F(7)
9 34 F(7)+F(8)
10 55 F(8)+F(9)
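As an aside, the F(i) column above can be reproduced with a short loop. This is my own illustrative sketch (the class and method names are not from any library), using the plain recurrence rather than naive recursion:

```java
// Generate Fibonacci terms F(0)..F(n) iteratively.
public class Fib {
    public static long[] firstTerms(int n) {
        long[] f = new long[n + 1];
        if (n >= 1) f[1] = 1;            // seeds: F(0) = 0, F(1) = 1
        for (int i = 2; i <= n; i++) {
            f[i] = f[i - 2] + f[i - 1];  // F(i) = F(i-2) + F(i-1)
        }
        return f;
    }
}
```

Printing firstTerms(10) reproduces the F(i) column of the table above.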

Now, let us extrapolate the series a bit into the negative indexes.

As we know, any ith term can be expressed as the sum of the two preceding terms, i.e. F(i) = F(i-2) + F(i-1); hence, by inference, in the series continuum the ith term F(i) is also the difference of the two succeeding terms, i.e. F(i) = F(i+2) - F(i+1).

Therefore, if the above table is rewritten with this other form as well, the values on the negative scale (i.e. the negative indexes) present an interesting sign-alternating mirror of the positive terms, as shown below.

i F(i) Std form My form
-4 -3 F(-6)+F(-5) F(-2)-F(-3)
-3 +2 F(-5)+F(-4) F(-1)-F(-2)
-2 -1 F(-4)+F(-3) F(0)-F(-1)
-1 +1 F(-3)+F(-2) F(1)-F(0)
0 0 F(-2)+F(-1) F(2)-F(1)
1 1 F(-1)+F(0) F(3)-F(2)
2 1 F(0)+F(1) F(4)-F(3)
3 2 F(1)+F(2) F(5)-F(4)
4 3 F(2)+F(3) F(6)-F(5)
5 5 F(3)+F(4) F(7)-F(6)
6 8 F(4)+F(5) F(8)-F(7)
7 13 F(5)+F(6) F(9)-F(8)
8 21 F(6)+F(7) F(10)-F(9)
9 34 F(7)+F(8) F(11)-F(10)
10 55 F(8)+F(9) F(12)-F(11)

A casual inspection of this series shows that the even-valued and odd-valued terms are not haphazard: every even-valued term is surrounded by an odd-valued term on both sides. Though not particularly fascinating to a mathematically trained eye, this fundamental nature of the series lends a fresh perspective to an ordinary programmer like me.

Continuing on our trail of deductions: given i in {Z}, the ordinate F(i) can now be defined as:

F(i-2) + F(i-1) = F(i) = F(i+2) - F(i+1)        ---------------  (1)
F(-i) = (-1)^(i+1) * F(i)   for all i in {Z+}   ---------------  (2)
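Rules (1) and (2) can be sanity-checked by running the recurrence in both directions. The sketch below is mine (illustrative names; naive recursion is fine for the small indexes used here):

```java
// Fibonacci extended to negative indexes: forwards via F(i) = F(i-2) + F(i-1),
// backwards via F(i) = F(i+2) - F(i+1). Rule (2) then says
// F(-i) = (-1)^(i+1) * F(i).
public class NegaFib {
    public static long fib(int i) {
        if (i == 0) return 0;
        if (i == 1) return 1;
        if (i > 1) return fib(i - 2) + fib(i - 1);
        return fib(i + 2) - fib(i + 1);   // negative-index branch
    }
}
```

For example, fib(-4) comes out as -3 while fib(4) is 3, matching the sign-alternating column of the table.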


Now for the interesting observation; consider the even-valued terms alone.


If you observe carefully, an even-valued term occurs every 3rd term, and curiously enough each such even-valued term bears an enigmatic relation with the two previous even-valued terms. So, for instance, the 9th term has an interesting relation with the 6th and 3rd terms. Let us explore this a bit.

For eg:

F(12) = 144 = 4 * 34 + 8 = 4 * F(9) + F(6)
F(9)  =  34 = 4 *  8 + 2 = 4 * F(6) + F(3)
F(6)  =   8 = 4 *  2 + 0 = 4 * F(3) + F(0)

Hmm....

Try further down when the indexes become negative. For example:

F(3)  =  2 = 4 * 0 + 2       = 4 * F(0)  + F(-3)  = 4 * F(0) + F(3)    [by rule (2), F(-3)  = (-1)^(3+1) F(3)]
F(0)  =  0 = 4 * 2 + (-8)    = 4 * F(-3) + F(-6)  = 4 * F(3) - F(6)    [by rule (2), F(-6)  = (-1)^(6+1) F(6)]
F(-3) =  2 = 4 * (-8) + 34   = 4 * F(-6) + F(-9)  = -4 * F(6) + F(9)   [by rule (2), F(-9)  = (-1)^(9+1) F(9)]
F(-6) = -8 = 4 * 34 + (-144) = 4 * F(-9) + F(-12) = 4 * F(9) - F(12)   [by rule (2), F(-12) = (-1)^(12+1) F(12)]

This was the first glimmer of a smile that day: something I had not noticed for so long was staring right back at me, smilingly!

What interested me further is that this very same beautiful relation exists among the odd-valued terms as well!

So
F(7)  = 13 = 4 *  3 + 1 = 4 * F(4) + F(1)
F(8)  = 21 = 4 *  5 + 1 = 4 * F(5) + F(2)
F(10) = 55 = 4 * 13 + 3 = 4 * F(7) + F(4)

Again with negative indexes;

F(5) = 5 = 4 * 1 + 1    = 4 * F(2)  + F(-1)
F(2) = 1 = 4 * 1 + (-3) = 4 * F(-1) + F(-4) = 4 * F(1) - F(4)

So I figured that, regardless of whether a term's value is even or odd, the Fibonacci series exhibits a relationship whereby every ith term can also be formulated from the (i-3)rd and (i-6)th terms, as defined and proved below.

Mathematical proof

OK, just a thought to satiate mathematical rigour with a proof, in addition to being satisfied with the spreadsheet way of verifying it.


F(i-1) = F(i-3) + F(i-2)                                 ---------------- (3) by simple application of (1)
F(i-2) = F(i-4) + F(i-3)                                 ---------------- (4) by simple application of (1)

F(i+2) = F(i-2) + F(i-1) + F(i+1)                        ---------------- (5) by expanding F(i) in F(i+2) = F(i) + F(i+1) using (1)

F(i)   = F(i-4) + F(i-3) + F(i-1)                        ---------------- (6) by applying (5) shifted down to yield F(i)
F(i-1) = F(i-5) + F(i-4) + F(i-2)                        ---------------- (7) by applying (5) shifted down to yield F(i-1)
F(i-2) = F(i-6) + F(i-5) + F(i-3)                        ---------------- (8) by applying (5) shifted down to yield F(i-2)

F(i) = F(i-4) + F(i-3) + [F(i-3) + F(i-2)]               ---------------- (9) substituting F(i-1) in (6) by (3)

So
F(i) = F(i-4) + 2 * F(i-3) + F(i-2)

F(i) = F(i-4) + 2 * F(i-3) + [F(i-6) + F(i-5) + F(i-3)]  --------------- (10) by substituting F(i-2) with (8)
F(i) = 3 * F(i-3) + F(i-6) + [F(i-5) + F(i-4)]

Again, since [F(i-5) + F(i-4)] = F(i-3) by simple application of (1),

therefore

F(i) = 3 * F(i-3) + F(i-6) + F(i-3)


F(i) = 4 * F(i-3) + F(i-6)
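The derived identity can also be verified numerically in a few lines. This is my own quick check (illustrative names; long arithmetic comfortably covers the indexes tested):

```java
// Verify F(i) = 4*F(i-3) + F(i-6) for i = 6..upto against the plain recurrence.
public class FibIdentity {
    public static boolean holds(int upto) {
        long[] f = new long[upto + 1];
        f[1] = 1;                                       // f[0] = 0, f[1] = 1
        for (int i = 2; i <= upto; i++) f[i] = f[i - 2] + f[i - 1];
        for (int i = 6; i <= upto; i++) {
            if (f[i] != 4 * f[i - 3] + f[i - 6]) return false;
        }
        return true;
    }
}
```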




Solution to the even-term summation

Now it dawns why folks were interested in the even-valued terms.

A simple loop that jumps in steps of 3 cuts the iterations to a third of the original count. Meaning, with the first two even-valued seed terms, 0 and 2, I can now generate the whole set of even-valued terms directly by applying the formula FEven(i) = 4 * FEven(i-1) + FEven(i-2).

Of course, please be advised that the code below is simple, indicative pseudo-code meant only to highlight the fact that a different formulation exists to generate every 3rd term of the series, which can potentially reduce the computational complexity.


int[] fibonacciConstantValues = {0, 2};

private int fibonacciEven(int i) {
    if (i < 2) return fibonacciConstantValues[i];
    else return 4 * fibonacciEven(i - 1) + fibonacciEven(i - 2);
}

/** Sum of the first N even-valued terms. **/
int sum = 0;
for (int i = 0; i < N; i++) { sum += fibonacciEven(i); }

Or, if you just need to build the complete Fibonacci series: the following may not have an edge in computational speed over the traditional form, but is mentioned for the sake of completeness:

int[] fibonacciConstantValues = {0, 1, 1, 2, 3, 5};

private int fibonacci(int i) {
    if (i < 6) return fibonacciConstantValues[i];
    else return 4 * fibonacci(i - 3) + fibonacci(i - 6);
}
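Note that the recursive form recomputes sub-terms exponentially; for actual use, the same even-term recurrence runs in linear time when written iteratively. A sketch under my own names:

```java
// Sum the first n even-valued Fibonacci terms (0, 2, 8, 34, ...)
// using E(k) = 4*E(k-1) + E(k-2) with seeds E(0) = 0, E(1) = 2.
public class EvenFibSum {
    public static long sumFirstEven(int n) {
        long prev = 0, curr = 2, sum = 0;
        for (int k = 0; k < n; k++) {
            sum += prev;                  // add E(k)
            long next = 4 * curr + prev;  // E(k+2) from E(k+1) and E(k)
            prev = curr;
            curr = next;
        }
        return sum;
    }
}
```

For instance, the first four even-valued terms 0 + 2 + 8 + 34 sum to 44.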


Summary

In summary, an interesting insight opened up within me about this great series, an insight I had not heard anywhere, in college or elsewhere. This information may well be lying in some mathematical journal; however, I feel it would have been worth at least a mention in some programming texts. Anyway, it is not every day that I wake up to a cold September morning, witness my better half yelling at the kids while indirectly making me do an uninteresting household chore, and then, most of all, end up thinking of a different avatar of the Fibonacci series. I sign off saying it was well worth a weekend, enough to carry as a nostalgic memory into my future. Thanks for reading my thoughts this far.








Friday, March 11, 2016

Slf4j Custom Logger using customized toString

Introduction:

Object details, such as attribute values over the course of a program run, are extremely valuable during troubleshooting, yet this information usually turns out either too voluminous or too sparse. To be safe rather than sorry, developers usually put all attributes into the toString method. The aggravating problem comes when the objects have deeply nested value-object structures, or sit inside collections: the object details then occupy pretty much all of the real estate in the debug log file, leaving marginal space for the trace of the program flow. Though modern logging frameworks already provide a certain level of coarsening, with each emitted line bound to a log level such as DEBUG or INFO, the volume of object data stays fairly high because the toString method is not log-level sensitive.

Issues at a glance:

Recently I was in a situation where the voluminous logging was seriously intimidating, and the goal was to control the amount of information logged from an object in correspondence with the logger level. I surmised that the following issues were blocking this objective:
  1. Objects typically have the toString method statically available, printing one and only one string representation of the object, irrespective of the level of fine-grained-ness desired.
  2. The loggers we use (such as Slf4j with log4j, for example) always use this implicit toString method of an object in their core logging methods.

Strategy:

The strategy to solve this issue is as follows:
  1. Define an enum LevelOfDetail to indicate the different levels of detail to be logged, such as brief, medium, none and all, which can easily be mapped to log levels.
  2. Define an interface, say LevelledToString, having a method toString(LevelOfDetail) that can respond to each desired level of detail with the appropriate amount of information about the object.
  3. Make value objects implement LevelledToString (either manually, or with the help of annotations and code generators such as Eclipse Xtend).
  4. Next, build a flavour of Slf4j's Log4jLoggerAdapter such that the logger actually calls toString(LevelOfDetail) on LevelledToString instances (even when nested or inside collections) rather than the implicit toString method when logging value objects.
  5. Glue this new logger adapter in using the standard mechanisms defined and documented by Slf4j, so that clients of the existing Slf4j logger are oblivious to the replacement.

Sample:

While the sections below provide some insight into what has been done, here is a quick sample. The level-log-sample is housed in the slf4j-custom-logger repo. Please remember it is not a Maven module under slf4j-custom-logger, hence the level-logger and level-tostring modules it depends on need to be built and installed locally first, and then used as in the following instructions:
  1. Say you are in the c:\workspaces folder
  2. git clone https://github.com/venkateshamurthy/slf4j-custom-logger.git
  3. cd c:\workspaces\slf4j-custom-logger
  4. mvn clean install (this installs level-logger and level-tostring to the local repository, say ${user.home}/.m2/repository)
  5. cd c:\workspaces\slf4j-custom-logger\level-log-sample
  6. [this is an independent project, not part of the parent pom slf4j-custom-logger]
  7. [Assumption: level-logger and level-tostring are in the local repository ${user.home}/.m2/repository; the re-usable dependencies will be moved to OSSRH subsequently.]
  8. mvn clean test exec:java -Dexec.mainClass="com.github.venkateshamurthy.util.logging.Sample" -Dexec.classpathScope="test"

Approach:

This section details a few interfaces and classes that are key to materialize the strategy mentioned before.

Reference : Please refer to the code at: https://github.com/venkateshamurthy/slf4j-custom-logger

Changes on the object:

  1. Come up with an enum LevelOfDetail and an associated interface LevelledToString, to use it in an overloaded toString method
  2. Develop all value objects implementing LevelledToString, in case you want to do it manually
  3. Develop an annotation @ToLevelledStringAnnotation that allows definition of which attributes to print at which level, e.g. (brief={"attr1","attr2"})
  4. Develop an annotation processor that reads this annotation and regenerates the source code with the toString(LevelOfDetail) method written in

LevelOfDetail and LevelledToString:

The level of detail can be modelled as an enum (say with values none, brief, medium and all), and LevelledToString as an interface with a single toString(LevelOfDetail) method that value objects implement alongside the regular toString (constructors/setters of such value objects are left out for brevity).
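Since the original code embeds are missing from this post, here is a minimal sketch of what the enum, the interface, and a sample value object could look like (names follow the strategy described above; the attribute choices are purely illustrative):

```java
// Degrees of detail an object can render itself at; mapped to log levels.
enum LevelOfDetail { NONE, BRIEF, MEDIUM, ALL }

// Value objects implement this alongside the implicit toString().
interface LevelledToString {
    String toString(LevelOfDetail level);
}

// A sample value object (constructors/setters left out for brevity).
class Customer implements LevelledToString {
    String name = "Alice";
    String phone = "555-1234";
    String address = "12, Some Street";

    @Override
    public String toString(LevelOfDetail level) {
        switch (level) {
            case NONE:   return "";
            case BRIEF:  return "Customer[name=" + name + "]";
            case MEDIUM: return "Customer[name=" + name + ", phone=" + phone + "]";
            default:     return "Customer[name=" + name + ", phone=" + phone
                              + ", address=" + address + "]";
        }
    }
}
```

At INFO the logger would then emit the BRIEF form, while TRACE could emit ALL.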

Changes on Logger:


  1. Develop a new CustomFormattingLog4jLoggerAdapter (and possibly an abstract class to hold common methods)
    1. An abstract class to hold the common methods
    2. A concrete class to deal with the custom message formatting
  2. Wrap the existing
    1. Slf4j MessageFormatter into an adapter interface (IFormattingTupleFactory)
    2. FormattingTuple into an IFormattingTuple implementation, just so we can add derivatives (for example, when we need a customized toString format in the message formatting)
  3. As documented by Slf4j:
    1. Define a customized logger adapter factory bound to ILoggerFactory and have this customized factory produce instances of CustomFormattingLog4jLoggerAdapter
    2. Add a StaticLoggerBinder in the org.slf4j.impl package and bind the customized logger adapter factory so that clients of SLF4J loggers are transparently handed the new logger

Summary:

In summary, logging needs have grown beyond rudimentary level-based logging such as debug and info. Current times demand log-level-sensitive printing of object attributes at just the tap of an annotation declaration, say @ToLevelledString(brief={"attr1","attr2"}) on a class containing several attributes including attr1 and attr2.

In this experiment I was able to develop a customized logger (along with the other Slf4j ecosystem classes) and the interface for objects to implement a dynamic toString. In the near future I will experiment with some annotation libraries to automate the generation of toString(LevelOfDetail) and update this post.

Details:

The New Log4jLoggerAdapter:


The new flavour of Log4jLoggerAdapter can take the form of an abstract class extending the very same MarkerIgnoringBase class that Log4jLoggerAdapter also extends, holding most of the mundane, repeated boiler-plate code and leaving the interesting toString handling to a small concrete class.

The abstract class AbstractLog4jLoggerAdapter has one abstract method called doLog, which consumes a Slf4jLevel (again, an enum representation of the different Slf4j log levels, since Slf4j does not natively provide a level enumeration).

   /**
    * This method does Level enabled formatting and logging.
    *
    * @param slf4jLevel
    *           the {@link Slf4jLevel} passed
    * @param format
    *           message format in the standard slf4j marker format {}
    * @param args
    *           to be passed to the logging
    */
   protected abstract void doLog(final Slf4jLevel slf4jLevel, final String format,
         final Object... args);


Custom adapter (CustomFormattingLog4jLoggerAdapter):

The concrete class that implements the doLog method above requires a few supporting classes/interfaces to achieve level-based printing of object attributes. Slf4j uses a helper class called MessageFormatter to read the message pattern and an argument array and create instances of FormattingTuple, which are then used to actually log. The logic to convert an object to an appropriate format/style is the responsibility of the MessageFormatter.

However, there is a slight catch: both MessageFormatter and FormattingTuple don't implement an interface (unlike, say, Log4jLoggerFactory), which is crucial for extensibility and pluggability of different formatters. To circumvent this, we can superpose the interfaces IFormattingTupleFactory and IFormattingTuple and write a small adapter to wire in the existing MessageFormatter and FormattingTuple as adaptees, used as follows (check the doLog method).

We could call this adapter CustomFormattingLog4jLoggerAdapter, just to indicate its specialization in customized string representation during logging.

/**
 * The Class CustomFormattingLog4jAdapter allows a custom {@link IFormattingTupleFactory formatting
 * tuple factory} to be used to provide a customized string representation in Log statements.
 */
import org.apache.log4j.Level;
import org.apache.log4j.LogManager;
import org.slf4j.Logger;
import org.springframework.util.ObjectUtils;

public class CustomFormattingLog4jLoggerAdapter extends AbstractLog4jLoggerAdapter {

   // ... fields, constructor and delegating log methods elided ...
   @Override
   protected final void doLog(final Slf4jLevel slf4jLevel, final String format, final Object... args) {
      Level level = slf4jLevel.getLog4jLevel();
      if (logger.isEnabledFor(level)) {
         Level effectiveLevel = level == Level.TRACE ? effectiveTraceLevel() : level;
         IFormattingTuple ft;
         if (args != null && args.length == 1) {
            ft = args[0].getClass().isArray()
                 ? tupleFactory.arrayFormat(slf4jLevel, format, ObjectUtils.toObjectArray(args[0])) 
                 : tupleFactory.format(slf4jLevel, format, args[0]);
         } else if (args != null && args.length == 2) {
            ft = tupleFactory.format(slf4jLevel, format, args[0], args[1]);
         } else {
            ft = tupleFactory.arrayFormat(slf4jLevel, format, args);
         }
         logger.log(FQCN, effectiveLevel, ft.getMessage(), ft.getThrowable());
      }
   }
}

The CustomFormattingLog4jLoggerFactory


The CustomFormattingLog4jLoggerFactory is a simple implementation that just creates our logger.


The FormattingTuple and its factory 


The formatting tuple is adapted to an interface IFormattingTuple just so that any future variation can be dealt with by differing implementations of IFormattingTuple; similarly for the factory that produces the IFormattingTuple.

Glue class StaticLoggerBinder

Beyond these, the glue class StaticLoggerBinder needs to be placed in the org.slf4j.impl package to get the necessary wiring working.







Monday, July 13, 2015

A tryst with dozer custom converters

My experiments with dozer

Introduction

Object-to-object mapping constitutes a significant part of a front-tier service's hand-off of a request to the downstream layers (be it a DAO or another service). The major overhaul comes across as the substantial pain of using naive source getters and target setters to build the target objects of interest, and it confounds developer and maintainer alike. The oft-repeated transformations during the mapping, such as formatting, composing, splitting, and target attribute/field substitution, are largely left to the user's wisdom to handle in nondescript, isolated pockets of code. A ubiquitous example is a phone number or an address, where different flavours of structure exist to cater to different locales. For instance, in one flavour the road and block names/numbers are captured in fields called address1 and address2, while another flavour refers to the same as street1/line1 and street2/line2 respectively. Further, a state field in the address object may be referred to as province, and zip may be substituted for pin.

Intent

The intent of reminiscing in this article, and the work underneath it, came about when the above-stated problem started quickly getting out of hand in a recent situation at work. The number of different objects to be mapped to one type or another started to exceed a handful (say more than 25), where the semantics of the source and target objects remained the same but the field names differed (just like address1 and street1). It was an occasion where a classic remote proprietary service needed to pass a request object to a target type with many attributes synonym-ed to different names.

The first thing that crossed my mind was to keep off the lazy habit of jumping on the bandwagon of getters and setters (these days any half-good POJO comes at least with gets, sets and a default no-arg constructor without much effort). Of course I am not whining about POJO principles here; I just felt that life is too short for coding with POJO gets/sets, and desired a bit more on top that can take care of the mundane stuff in a declarative fashion rather than just code. In fact this is the theme close to my heart in embracing Project Lombok's annotations.

The real intent of this article is by no means a rant on bean-to-bean mappers; as explained in the use-cases section, it is about a few special situations that called for a slightly more general solution to be built, which forms the crux of this article. Please read on to know more.

Why dozer

Thus dawned a bright occasion on a winter morning of November 2014, when we decided to do some soul searching and figure out the best tool worth its salt for the purpose we had. Tools/libraries such as Commons Convert from Apache Commons and the Spring libraries were looked upon as first choices, since those libraries usually form the motherboard of many of the projects we build. While the main aim of this transformation was developer simplicity, in not having to code and keeping it mostly declarative, the approach was not desired to be entirely bereft of coding.

Since the Commons/Spring approaches still need a fair amount of hand-cranked code for every conversion type, the search continued beyond the comfort zone of Apache Commons and Spring. The web has many comparison articles on bean-to-bean mapping, so we will not go about comparing one library against another here. However, when we chanced upon Dozer, it sounded promising, with an impressive feature set such as a declarative, XML-ized way of defining the mapping and a great hook for plugging in custom converters that can be sewn together using Spring beans. After a quick check within the organization for the yearned-for buy-in, we quickly set out to find out the various bells and whistles it offered.

Use-cases

Most of the mundane mappings are handled by the simplistic, sensible defaults applied by Dozer's configuration. For instance, the declarative XML pieces for a mapping didn't need any explicit mention of fields that are common to the source and target classes. Readers are encouraged to refer to web articles on Dozer to glean more information. However, there were some peculiar use-cases we hit frequently that couldn't be found in either the Dozer documentation or on the web, and these are essentially the reasons for this blog post.

These specific use cases were usually related to

1.     An immutable/constant object or value to be substituted into a target field.
2.     In another case, the target objects had one nagging pattern: many fields such as address and phone were always required to be of list type (the POJO generators for these third-party target objects were designed to emit these fields as lists, so we had no choice). This usually meant such target fields not only had to be mapped but also encapsulated into a list.

These specific use cases needed different custom converters, purposed as follows:
1.     A field of the target class that needs to be substituted by a type-specific constant (mostly primitives and dates). Here the particular target field is simply substituted with a constant.
2.     A field of the target class that needs to be substituted by a pre-defined bean from the Spring beans XML. Here the particular target field is a complex custom type, a bean defined with appropriate values for its attributes.
3.     A target field that needs to be populated as a Java collection type (for now only List is supported). Here the source bean needs to be converted to a target type and then added as an element of a collection of that target type.

Implementation

A custom converter in Dozer provides the following configurable keys (amongst many others) which can be utilized for the purpose at hand:
1.     custom-converter-id: the bean id, straight out of a bean definition in the Spring-based beans.xml used in the project.
2.     custom-converter-param: a helpful hint or string parameter which is set on the converter post-construction.
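To illustrate how these two keys come together, here is a dependency-free sketch of a constant-substituting converter of my own devising. In the actual project the class would implement Dozer's ConfigurableCustomConverter (whose setParameter receives the custom-converter-param value); the two methods are mimicked here so the sketch stays self-contained, and the class name is illustrative:

```java
// Sketch of a constant-substituting int converter. In a real project it
// would declare "implements org.dozer.ConfigurableCustomConverter"; the
// method signatures below mirror that interface.
public class ToIntConstantConverter {
    private String param;                         // holds custom-converter-param

    public void setParameter(String parameter) {  // called by Dozer with the param
        this.param = parameter;
    }

    // Mirrors CustomConverter.convert(...): ignore the source value and
    // substitute the parsed constant into the target field.
    public Object convert(Object existingDestinationFieldValue, Object sourceFieldValue,
                          Class<?> destinationClass, Class<?> sourceClass) {
        return Integer.valueOf(param);
    }
}
```

With custom-converter-id pointing at such a bean and custom-converter-param set to "24", the target field would always receive the constant 24.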

A first implementation of the primitive converters, pre-defined bean substitutors and a source to target list mapper can be found in my github repository : dozer-experiments.

The first step before building up the solution was to firm up the project based on Spring and quickly get up a workable beans.xml for the purpose.

Note: the rationale for adding beans.xml and even the Dozer bean configuration files to src/test/resources, and primarily making the test package the go-to source for understanding, is to keep the actual main classes light, since the usage and configuration of converters can become very contextual. Basically, the beans and Dozer files are data driven and are best served with test data in a demo environment rather than being made part of the main jar file. Readers are encouraged to go through the README, the test package and the configuration samples, and make up their configurations accordingly.

A note on content of beans.xml
Normally, a Spring-based project I write requires some quintessentials, listed below:
1.     I need my beans XML to be sleek, hence the Spring p-namespace and c-namespace. These allow p:<property-name> and c:<attribute-name> in a bean definition as XML attributes (half one's sanity may survive just because of this; otherwise beans.xml gets too busy!).
2.     Spring annotation processors (@PostConstruct etc.) are usually made use of, along with a few default values from bean-default.properties, hence the following snippet. (I don't like repeating package names or base path names, which increase the breadth and/or height of the XML.)
3.     The Slf4j logger (please refer to logback.xml for log levels and formats) and Dozer-related configurations appear in beans.xml as follows:
<bean class="org.springframework.context.annotation.CommonAnnotationBeanPostProcessor"
      scope="singleton" />

<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"
      scope="singleton">
  <description>This is a common properties file that may declare
    repeatedly usable strings, constants and class/package names etc.
  </description>
  <property name="location" value="bean-default.properties" />
</bean>

<bean id="system-properties-setup"
      class="org.springframework.beans.factory.config.MethodInvokingFactoryBean"
      p:targetObject="#{@systemProperties}" p:targetMethod="putAll">
  <property name="arguments">
    <util:properties>
      <!-- <prop key="java.util.logging.SimpleFormatter.format">%1$tF %1$tT
           [%4$s]: %5$s %n</prop> -->
      <prop key="dozer.configuration">dozer.properties</prop>
      <prop key="dozer.debug">false</prop>
    </util:properties>
  </property>
</bean>


The rest of the beans are project / problem specific:
1.     Converters of primitive types: these turn a string-parsable value into a specific primitive type such as int, long, short etc. Dates are also supported, with an assumed simple date format. As the name suggests, these convert the value specified in the custom-converter-param attribute (as part of dozerBeanMapper.xml); the converter itself is the one named in custom-converter-id. Please refer to the README for details.
2.     Converters that substitute a pre-defined target bean: a fully defined target bean can act as a stand-in while mapping some fields.
3.     A source-to-target-list converter: it maps a singular source element and wraps the result in a list.
4.     The Dozer mapper definition is done entirely in beans.xml, so it is imperative to use this rather than create some default mapper.
Most of the bean ids are self-explanatory, so not every bean is explained here.
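To make the first category concrete, here is a self-contained sketch of how a primitive converter such as "to-int" might look. The interface is mirrored locally so the snippet compiles without Dozer on the classpath; in the actual project the converter would implement Dozer's ConfigurableCustomConverter, whose setParameter method receives the custom-converter-param value. The class name ToIntConverter is illustrative, not taken from the repository.

```java
// Local mirror of Dozer's ConfigurableCustomConverter so this sketch is
// self-contained; the real converter would implement the Dozer interface.
interface ConfigurableCustomConverter {
    void setParameter(String parameter);
    Object convert(Object existingDestinationFieldValue, Object sourceFieldValue,
                   Class<?> destinationClass, Class<?> sourceClass);
}

// Hypothetical "to-int" converter: ignores the source field value and
// returns the constant held in custom-converter-param parsed as an int.
class ToIntConverter implements ConfigurableCustomConverter {
    private String parameter; // receives the custom-converter-param value

    @Override
    public void setParameter(String parameter) {
        this.parameter = parameter;
    }

    @Override
    public Object convert(Object existingDestinationFieldValue, Object sourceFieldValue,
                          Class<?> destinationClass, Class<?> sourceClass) {
        // Parse the configured constant to the target primitive type.
        return Integer.parseInt(parameter);
    }
}
```

The other primitive converters (to-long, to-double, to-date) would follow the same shape, differing only in how the parameter string is parsed.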

While the Dozer user guide is a good reference on bean mapping, here an attempt is made, with small XML snippets, to explain the solution for the use cases addressed. The example classes for conversion are declared in the Dozer bean mapping file housed in the examples package for demonstration purposes.

For instance, a quick way to get started with substitution of constant values is as follows, where custom-converter-param holds the actual value to be converted to the desired primitive type.

<mapping map-id="source-to-target-with-few-constants">
  <class-a>com.github.venkateshamurthy.dozer.converters.examples.SourceHolder</class-a>
  <class-b>com.github.venkateshamurthy.dozer.converters.examples.TargetHolder</class-b>
  <field map-id="source-to-target"><a>source</a><b>target</b></field>
  <!-- the custom-converter-ids are primitive converters, but the values are strings to be converted -->
  <field custom-converter-id="to-date" custom-converter-param="1947-08-15"><a>dob</a><b>dob</b></field>
  <field custom-converter-id="to-int" custom-converter-param="24"><a>age</a><b>age</b></field>
  <field custom-converter-id="to-double" custom-converter-param="500000.0"><a>salary</a><b>salary</b></field>
  <field custom-converter-id="to-long" custom-converter-param="100300500700"><a>uan</a><b>uan</b></field>
 </mapping>

A pre-defined bean substitution can be achieved with the following snippet:

<mapping map-id="source-to-substitute-target">
  <class-a>com.github.venkateshamurthy.dozer.converters.examples.SourceHolder</class-a>
  <class-b>com.github.venkateshamurthy.dozer.converters.examples.TargetHolder</class-b>
  <!-- the custom-converter-param values are prefixed here with bean:<bean-name>, as the target bean used for substitution must be defined in beans.xml -->
  <field  custom-converter-id="source-to-target-pre-defined" custom-converter-param="bean:pre-defined-target">
   <a>source</a> <b>target</b>
  </field>
 </mapping>

Note: The custom-converter-param must carry the mandatory prefix bean: followed by an actual bean id from beans.xml representing the pre-defined target bean. This prefix convention is mandatory and is a personal one.
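A minimal sketch of how this convention might be enforced follows; the class and method names here are hypothetical, not from the actual converter code.

```java
// Hypothetical helper enforcing the personal "bean:" prefix convention:
// strip the mandatory prefix to obtain the bean id to look up in beans.xml.
class BeanParamParser {
    static String extractBeanId(String param) {
        final String prefix = "bean:";
        if (param == null || !param.startsWith(prefix)) {
            throw new IllegalArgumentException(
                    "custom-converter-param must start with '" + prefix + "' but was: " + param);
        }
        return param.substring(prefix.length());
    }
}
```

The extracted id would then be handed to the Spring context to fetch the pre-defined target bean.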

The other type is the source-to-target-list converter, which requires a proper mapping to exist between a singular element of the source class and the target class. An example snippet follows:

<mapping map-id="source-to-target-list" > 
  <class-a>com.github.venkateshamurthy.dozer.converters.examples.SourceHolder</class-a>
  <class-b>com.github.venkateshamurthy.dozer.converters.examples.TargetListHolder</class-b>
  <!-- the converter is source to a target list and custom-converter-param values are prefixed here with map-id: as the singular element mapping is required in dozer bean mapping definition -->
  <field  custom-converter-param="map-id:source-to-target" custom-converter-id="source-to-target-list">
    <a>source</a> <b>targets</b>
  </field>
 </mapping>

Note: The custom-converter-param carries the prefix map-id: followed by an actual mapping id from the dozerBeanMapping xml (or some included Dozer mapping config file). This ensures the correct mapping is applied between the singular source and target elements, and that a list wrapping the singular target element is returned. This prefix convention is mandatory and is a personal one.
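The core of that list converter can be sketched as follows. To keep the example self-contained, the Dozer mapper call is stood in by a BiFunction taking (source, mapId); the class and method names are illustrative, not from the actual code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;

// Sketch of the source-to-target-list converter's core logic: enforce the
// "map-id:" prefix, map the singular source element with the referenced
// mapping, then wrap the single mapped target in a list.
class ListConverterSketch {
    static final String PREFIX = "map-id:";

    static <S, T> List<T> convertToList(String param, S source,
                                        BiFunction<S, String, T> singularMapper) {
        if (param == null || !param.startsWith(PREFIX)) {
            throw new IllegalArgumentException(
                    "custom-converter-param must start with '" + PREFIX + "'");
        }
        // The suffix is the map-id of the singular source-to-target mapping.
        String mapId = param.substring(PREFIX.length());
        T target = singularMapper.apply(source, mapId);
        // Wrap the single mapped target element in a list.
        List<T> targets = new ArrayList<>();
        targets.add(target);
        return targets;
    }
}
```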

Usage:

In this section a few test code snippets are shown. The reader can take a cue from these to make use of the new converters proposed here.
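The tests below rely on a Spring context (ctx) and a Dozer mapper (mapper). A presumed wiring for that fixture is sketched here; the file name beans.xml matches the text, but obtaining the mapper by type from the context is an assumption about how the project declares it.

```java
// Presumed test fixture wiring (lookup style assumed): the Spring context
// supplies both the example beans and the Dozer mapper declared in beans.xml.
import org.dozer.Mapper;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

ApplicationContext ctx = new ClassPathXmlApplicationContext("beans.xml");
Mapper mapper = ctx.getBean(Mapper.class);
```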

/** Test a pre-defined target substitution. */
@Test
public void testPreDefinedTargetSubstitution() {
    SourceHolder sourceHolder = ctx.getBean("source-holder", SourceHolder.class);
    TargetHolder targetHolder =
            mapper.map(sourceHolder, TargetHolder.class, "source-to-substitute-target");
    Target expectedTarget = ctx.getBean("pre-defined-target", Target.class);
    assertEquals(expectedTarget, targetHolder.getTarget());
}

/** Test a field (of type date/int/long etc.) for a constant substitution. */
@Test
public void testTargetWithFewConstants() {
    SourceHolder sourceHolder = ctx.getBean("source-holder", SourceHolder.class);
    TargetHolder targetHolder =
            mapper.map(sourceHolder, TargetHolder.class, "source-to-target-with-few-constants");
    log.info(targetHolder.toString());
}

/** Test a singular element mapped to a destination list type. */
@Test
public void testTargetList() {
    SourceHolder sourceHolder = ctx.getBean("source-holder", SourceHolder.class);
    Source source = sourceHolder.getSource();
    TargetListHolder targetHolder =
            mapper.map(sourceHolder, TargetListHolder.class, "source-to-target-list");
    List<Target> targets = targetHolder.getTargets();
    assertEquals(1, targets.size());
    Target target = targets.get(0);
    assertEquals(source.getAddress1(), target.getStreet1());
    assertEquals(source.getAddress2(), target.getStreet2());
    assertEquals(source.getPhone(), target.getPhone());
    log.info(targets.toString());
}

While the test code helps in understanding the usage, a good look at the actual converter classes in the main package deepens that understanding further.

Summary:

In summary, though a set of peculiar use cases of bean-to-bean mapping was discussed within one context, the general solution presented here should be useful in a variety of similar situations arising in other contexts. This set of converters is only set to grow as further customizations and enhancements are applied. I earnestly request all readers to share their opinions and reviews, to make this article and the code a better experience for its users.