Saturday, December 25, 2010

Continuous Integration Step 2 – FxCop Integration with MsBuild

An FxCop project specifies the set of assemblies to be analyzed, the rules used to analyze the assemblies, and the most recent analysis results reported by FxCop.
The local path of FxCopCmd.exe on my build server is, for example:
C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\FxCopCmd.exe
The challenge is to write a custom target for code analysis, plug that target into the build process so that analysis runs as the last task, and then produce the report as an XML file. So below the After Build target from the previous post, I wrote the following script:
  <PropertyGroup>
    <BuildDependsOn>
      $(BuildDependsOn);
      Bash_CodeAnalysis
    </BuildDependsOn>
  </PropertyGroup>
   
  <Target Name="Bash_CodeAnalysis">
      <Message Text ="Static Code Analysis Starts Here"  Importance="high"> </Message>
  </Target>

Build and verify that the message appears at the end of the “After Build” target execution. The code above is very simple: run the Bash_CodeAnalysis target after the standard build. Now I can add the Exec command to it.

<Target Name="Bash_CodeAnalysis">
      <Message Text ="Static Code Analysis Starts Here"  Importance="high"> </Message>
  
   <Exec Command="&quot;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\FxCopCmd.exe&quot; /searchgac /rule:&quot;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\Rules&quot; /file:$(ProjectDir)bin /out:C:\fxCop.xml" 
          ContinueOnError="true">
    </Exec>
</Target>

Note: Prefer to write the Command value on a single line; otherwise the command-line tool will treat it as separate commands and throw an error.
Details of the parameters passed in:
/searchgac – Tells FxCop to search the GAC for any referenced DLLs not found in, say, the bin folder. If you forget this, FxCop will error out.
/rule:Rules.dll – You can specify as many rule assemblies as you like to check against your compiled DLL; just repeat this switch for each rule assembly you want. Unless you have a specific FxCop project, you must specify at least one rule. You can specify the whole Rules folder as well.
Note: As we are calling it from an MsBuild file, which is XML, we have to encode the quotes as &quot; otherwise the famous 9009 error will be encountered.
/file:YourCompiled.dll – Much like the rule parameter, you can specify as many DLLs as you would like to check, but you need at least one. I specified the project’s bin folder, which means all DLLs in it are analyzed.
/out:someXml.xml – You will want to specify an XML file to output the results to so that you can integrate the results into your build log. CruiseControl.NET uses XSL to transform the generated XML into HTML for viewing in its Web Dashboard and auto-generated emails.
ContinueOnError – Unless you have written your own FxCop rules, you will most likely want to set this value to true so that your build doesn’t fail each time you break one of the many rules that FxCop ships with.

Here is another example that applies only NamingRules.dll to a single DLL.
<Target Name="FxCop" DependsOnTargets="BuildProject"> 
<Exec Command="&quot;C:\Program Files\Microsoft FxCop 1.36\FxCopCmd.exe&quot;
 /searchgac /rule:&quot;C:\Program Files\Microsoft FxCop 1.36\Rules\NamingRules.dll&quot;
 /file:C:\Project\bin\My.Framework.dll /out:fxCop.xml" ContinueOnError="true">
    </Exec>
</Target>

Now save the project file.
Open the VS2010 command prompt again and browse to the folder where the csproj file exists. Execute this command:
 >>> MSbuild GD.AutoDist.MainService.csproj /fl
After build, static code analysis is performed by FxCop as follows.


Next is to dig into the CCNet configuration :)
Please provide your comments.

Wednesday, December 22, 2010

Continuous Integration Step 1 – MsBuild Integration


Starting a Continuous Integration process is a difficult task, but once the automated process is up and running it proves very helpful and easy to maintain. With minimal human interaction you can reduce your build time, improve accuracy, and hence shorten the time to market for the application. A great advantage.
There is a large set of options and combinations of tools. We prepared a dedicated build server machine in the company (TEO). Here is the set of software components on the build server; I will discuss each component’s wiring later on (the decision about the chosen packages is based on our defined criteria, including wide acceptance, open-source availability, and our needs).

MsBuild Integration with csproj file:
Our target is command-line building of a C# project using MSBuild. I created an ASP.NET website in VS2010 and want to configure it to build from the command-line tool. One goal is to copy the build output to a destination folder on the root drive (also configured dynamically in item metadata). So I unloaded the project file in VS and right-clicked to choose “Edit Project File”, which opens the project file for editing.

At the end of the file we can find the commented-out ‘BeforeBuild’ and ‘AfterBuild’ targets. In the AfterBuild target we will write our custom code.


Here is my code snippet in their place:

  <!--MSBuild integration started here by Bash 20-Dec-2010-->

  <Target Name="AfterBuild">
    <Message Text ="Custom After Build Action Started" Importance="high"></Message>
    <ItemGroup>
      <ProjectFolder Include="$(ProjectDir)\*.*" Exclude="$(ProjectDir)\*.cs;$(ProjectDir)\*.csproj;$(ProjectDir)\*.user">
        <!--Only immediate files of the project folder-->
        <PublishTo>2010_MSBuild</PublishTo>
      </ProjectFolder>

      <BinFolder Include="$(ProjectDir)bin\**\*.*" Exclude="*.cs">
        <!--All files and sub folders in bin-->
        <BinFolderTo>2010_MSBuild/bin</BinFolderTo>
      </BinFolder>

      <AccountFolder Include="$(ProjectDir)Account\**\*.*" Exclude="$(ProjectDir)Account\**\*.cs" >
        <!--All files and sub folders in Account-->
        <AccountFolderTo>2010_MSBuild/Account</AccountFolderTo>
      </AccountFolder>

      <ScriptsFolder Include="$(ProjectDir)Scripts\**\*.*">
        <!--All files and sub folders in Scripts-->
        <ScriptsFolderTo>2010_MSBuild/Scripts</ScriptsFolderTo>
      </ScriptsFolder>

      <StylesFolder Include="$(ProjectDir)Styles\**\*.*">
        <!--All files and sub folders in Styles-->
        <StylesFolderTo>2010_MSBuild/Styles</StylesFolderTo>
      </StylesFolder>
    </ItemGroup>

    <Message Text="ProjectFolder :@(ProjectFolder)" Importance="high"/>

    <Copy SourceFiles="@(ProjectFolder)"  DestinationFiles="@(ProjectFolder->'%(RootDir)%(PublishTo)\%(Filename)%(Extension)')" />
    <Copy SourceFiles="@(BinFolder)"  DestinationFiles="@(BinFolder->'%(RootDir)%(BinFolderTo)\%(Filename)%(Extension)')" />
    <Copy SourceFiles="@(AccountFolder)"  DestinationFiles="@(AccountFolder->'%(RootDir)%(AccountFolderTo)\%(Filename)%(Extension)')" />
    <Copy SourceFiles="@(ScriptsFolder)"  DestinationFiles="@(ScriptsFolder->'%(RootDir)%(ScriptsFolderTo)\%(Filename)%(Extension)')" />
    <Copy SourceFiles="@(StylesFolder)"  DestinationFiles="@(StylesFolder->'%(RootDir)%(StylesFolderTo)\%(Filename)%(Extension)')" />

  </Target>

Now save the project file.
Open the VS2010 command prompt and browse to the folder where the csproj file exists. Execute this command:
 >>> MSbuild GD.AutoDist.MainService.csproj /p:Configuration=Release
1. MSBuild – the standard command
2. The C# project file to be built
3. /p is a parameter with a key-value pair; key = Configuration and value = Release
After the build, the files are copied to the root’s 2010_MSBuild folder.

The next target is to analyze the code with static code analysis; FxCop is the goal for integration.

Sunday, December 19, 2010

Step 0 - How to start Continuous Integration Process

Continuous Integration
(continued in future posts)
Continuous Integration is the practice of integrating early and often, so as to avoid the pitfalls of "integration hell". The ultimate goal is to reduce rework and thus reduce cost and time to market for an application.
How to start the process in an organization
It started with the idea that the company should get maximum benefit from Continuous Integration. But the very first challenge is how to start. There are a number of tools, each with different capabilities and scripting to understand. On the other hand, the company works with a variety of .NET languages, almost all flavors of the .NET Framework, and all types of applications, from small projects to long-term maintenance projects: enterprise, web-based, Windows Forms-based, and network applications, etc. We use a variety of source code repositories such as TFS, VSS, and SOS (the latter where code ownership is on the client's premises). Some teams use VPN connections for remote coding and do not interact with our local network at all, and there are DMZs and further considerations besides.
Our lead then decided to develop a plan for this project: a road map defining the milestones at each level.

- Define mission statement
- Define tasks
  - Value addition
  - Success criteria
The Process
We wanted to create a simple running process that:
- Should not involve extra cost to the company (mostly open source or already in place in the company)
- Should be easy to adapt across the various teams in the company
- Should require minimum effort to plug the process into existing and new incoming projects (avoiding unnecessary scripting, coding, and configuration)
- Should use a well-tested and stable set of tools so that the running maintenance cost of the process stays low
- Should involve little customization until it is necessary; the first focus is on starting a process
- Once up and running, should be enhanced gradually by adding more areas/tools, etc.
Execution Plan
Defined tasks and their breakdown
Start Survey across Organization
We started a survey across the organization. The purpose was to collect the information necessary to define the feasibility criteria, select tools, and mature the process.


(To be continued ...)

Wednesday, December 15, 2010

WCF Hosting – Issues with VS2010 and Fresh IIS 6

I needed to deploy a WCF web service to the test environment. The web service is developed with VS2010 (.NET 4.0). I prepared a build of 28 projects with a single service using them, compiled the service with the ‘Any CPU’ option, and copied it to the test machine.

The test machine is a Windows Server 2003 x64 box running IIS 6. When I registered the WCF service, I figured out that ASP.NET was not registered with IIS.

Action: I ran the command >> aspnet_regiis -i
The next error that appeared in the browser was Error 404, “Page not found”.

On enquiry, I found that I should check the detailed sub-code for the 404 in the log. So I went to IIS -> Web Sites -> Properties -> Log Properties and traced the log file path. Then I opened the log file, where the following last entry was present:

2010-12-14 04:03:30 W3SVC1 127.0.0.1 GET /GD.AutoDist.MainService/AutomatedService.svc – 80 – 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.2;+WOW64;+Trident/4.0;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.4506.2152;+
.NET+CLR+3.5.30729;+.NET4.0C;+.NET4.0E) 404 2 1260

What is 404.2? Here is a description:

404 – Page not found.

  • 404.0 – (None) – File or directory not found.
  • 404.1 – Web site not accessible on the requested port.
  • 404.2 – Web service extension lockdown policy prevents this request.
  • 404.3 – MIME map policy prevents this request.

So my issue is “Web service extension lockdown policy prevents this request”, which means the lockdown policy is preventing the ASP.NET 4.0 ISAPI extension from processing the request. It happened because of the following; check it out:


Starting from the top, the last line shows that the v4.0 aspnet_isapi.dll extension is disabled; there’s your problem. So to enable it I tried this:


Now when I tried to access the WCF service, I got the following error:

The type ‘xxxxxxxxx’ provided as the Service attribute value in the ServiceHost directive, or provided in the configuration element system.serviceModel/serviceHostingEnvironment/serviceActivations could not be found.
And here are the exception details:

System.InvalidOperationException: The type ‘GD.AutoDist.MainService.AutomatedService’, provided as the Service attribute value in the ServiceHost directive, or provided in the configuration element system.serviceModel/serviceHostingEnvironment/serviceActivations could not be found.
I googled the issue and found that it is perhaps a good idea to add the following line under the system.serviceModel section in web.config:

<system.serviceModel>
  <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
  ….

The next error was as follows:

Server Error in ‘/’ Application.
Parser Error Message: It is an error to use a section registered as allowDefinition=’MachineToApplication’ beyond application level. This error can be caused by a virtual directory not being configured as an application in IIS.

Up to this point, here are my IIS virtual directory settings (the above error is saying I should create it as an application):



When I created the application by clicking the Create button above, I got a new error:

Server Error in '/GD.AutoDist.MainService.AutomatedService' Application.
Could not load file or assembly 'GD.AutoDist.Server.Common' or one of its dependencies. An attempt was made to load a program with an incorrect format.
To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1.


I made the above registry change, and now I could see the error log in detail on the web page. But beyond that, what does this mean? One of the DLLs in my solution, ‘GD.AutoDist.Server.Common’, is creating a problem. But what is it? I rechecked the bin folder, and the common DLL along with all its dependencies was there. Then I realized: it is a 64-bit operating system, so why not check the build option for the common library in VS2010? And I figured out that it was built as x86 instead of ‘Any CPU’. So I changed it, rebuilt the DLLs, and placed them in the deployment folder. (I also ensured that all the DLLs target Framework 4.0.) Finally the issues were over, and now I can start the WCF service in the browser.



That’s it:)

Tuesday, November 23, 2010

SQL Server Restore error working with backup file (.bak) – Operating system error 21 (The device is not ready)

When I create a database backup (.bak) on the production database server and restore it on the development server, an error may occur during the restore in SQL Server Management Studio.
The error is like this: "System.Data.SqlClient.SqlError: Directory lookup for the file "...rap.mdf" failed with the operating system error 21(The device is not ready.). (Microsoft.SqlServer.Smo)"

The backup stores the path of each file that was backed up, and by default a restore will restore these files to the same paths. The trouble comes when restoring on a different machine where those paths do not exist.

We should check the following options before going further:

  • Create a new database at a known location (of the .ldf and .mdf files)
  • Set overwrite = true when restoring
  • In the “Restore As” option, provide the correct file paths (the same as those of the newly created .ldf and .mdf files)
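
The same fix can also be scripted. Here is a hedged C# sketch that runs the equivalent T-SQL RESTORE with explicit file moves; the database name, backup path, logical file names ('MyDb_Data', 'MyDb_Log') and target paths are all hypothetical, so adjust them to the values reported by RESTORE FILELISTONLY for your own .bak file.

using System.Data.SqlClient;

class RestoreSketch
{
    static void Main()
    {
        // Hypothetical database name, backup path, logical file names and target paths.
        const string sql = @"
            RESTORE DATABASE MyDb
            FROM DISK = 'C:\Backups\MyDb.bak'
            WITH REPLACE,
                 MOVE 'MyDb_Data' TO 'C:\Data\MyDb.mdf',
                 MOVE 'MyDb_Log'  TO 'C:\Data\MyDb_log.ldf';";

        using (var conn = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();   // equivalent to the overwrite and Restore As options in Management Studio
        }
    }
}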

Monday, May 3, 2010


Using SQL Server Profiler – Tracing EF and SQL queries
SQL Profiler is sometimes helpful for figuring out what applications are doing on the wire. Open SQL Profiler and start with File | New Trace. It will ask which server you’d like to connect to and then pop up the Trace Properties window. If you click on the Events Selection tab, you can select the events you want to see.

For tracing EF, make sure the RPC:Completed event is present and also that the TextData column is checked for it.
 
For stored procedure and direct query execution tracing, make sure the SQL:BatchCompleted and SQL:BatchStarting events are present and also that the TextData column is checked for them.
When you press the Run button, you’ll see a rolling list of calls made to that instance of SQL Server. Now when you run an EF application, the profiler will trace its activity in SQL Server.
Tracing a particular Query Token (either executed from EF or TSQL)
To trace a particular token, in my case a stored procedure named ‘GetPricing’, the easiest thing to do is to press the Column Filters button and set a filter on the ‘TextData’ column.

Now write a query containing the ‘GetPricing’ token and execute it in the query editor, or execute the stored procedure of that name.
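
To drive the same call from application code, a small sketch like the following (the connection string and the assumption that GetPricing takes no parameters are mine) will show up in the trace as an RPC:Completed event with ‘GetPricing’ in the TextData column:

using System.Data;
using System.Data.SqlClient;

class ProfilerDemo
{
    static void Main()
    {
        using (var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
        using (var cmd = new SqlCommand("GetPricing", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;   // executed as an RPC call
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // consume the pricing rows
                }
            }
        }
    }
}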
Here is the output:
A nice tool for traceability.


Sunday, February 21, 2010

Enhancements in Linq-to-Entities 4

Most of the projects here at Mondo Islamabad are closely coupled with Relational databases. At the back end, these applications must interact with the data represented in a relational form. One-to-one database schemas are not a good choice for large and enterprise applications. The domain/conceptual models of applications mostly differ from the logical models of databases. The Entity Data Model (EDM) is a conceptual data model that can be used to model the data of a particular domain so that applications can interact with data as entities or objects.
Through the EDM, ADO.NET exposes entities as objects in the .NET environment. This makes the object layer an ideal target for Language-Integrated Query (LINQ) support. Therefore, LINQ to ADO.NET includes LINQ to Entities. LINQ to Entities enables developers to write queries against the database from the same language used to build the business logic. The following diagram shows the relationship between LINQ to Entities and the Entity Framework, ADO.NET, and the data source.

EF Enhancements in .NET 4 / Visual Studio 2010 Beta 2

Microsoft will launch Visual Studio 2010 on March 22, 2010. The release of .NET 4 / Visual Studio 2010 Beta 2 last month brought a couple of favorite “data”-related features to the Entity Framework (EF).
1) Pluralize Entity: The Entity Framework (EF4) supports the ability to pluralize entity sets and singularize entities. This means the entity set mapped to the table is called “Categories”, but the actual class that I work with in my application is “Category”.
2) FK Associations: EF4 supports the ability to include “foreign keys” in the data model. In the first version of the Entity Framework, if we had a “Category” table containing a CompanyId that referenced a record in the “Company” table, the generated model would convert the CompanyId into a navigation property drilling directly into the Company table. While this simplified some scenarios, it made web scenarios more difficult. We felt these problems during the translation of LINQ entities to business data transfer objects (DTOs): we had to write a smart translator to perform the conversions, and we had to write ugly entity-key-setting logic just to update the CompanyId on a Category record. EF4 now allows foreign key columns to be added directly to the entity. It is flexible in allowing both options, setting either the foreign key (CompanyId) or the navigation property (Category.Company), which makes disconnected entity scenarios much easier to program in N-tier and enterprise applications.
3) EnableDynamicData extension method: With VS2010 Beta 2, a new Dynamic Data feature has been added that can be enabled on our data controls like this:

    ListView1.EnableDynamicData(typeof(Category));

This one line of code will automatically bring many of the features that Dynamic Data provides:
  1. Automatic validation
  2. Support for Data Annotations on objects to control validation and display properties
  3. Support for field templates for customizing UI behavior based on data type
This should allow any developer to utilize the power of Dynamic Data without radically changing their application or requiring Linq to SQL or Entity Framework.
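
For context, here is a hedged sketch of how this is typically wired up in a Web Forms code-behind; the page class, the ListView1 control and the Category entity are assumed names, and the call is made during the page’s Init event as the documentation suggests:

using System;
using System.Web.UI;   // EnableDynamicData extension method (System.Web.DynamicData assembly)

public partial class CategoriesPage : Page
{
    protected void Page_Init(object sender, EventArgs e)
    {
        // Tell the control which entity type drives validation,
        // data annotations and field templates.
        ListView1.EnableDynamicData(typeof(Category));
    }
}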

LINQ to Entity 4.0, WCF and Disconnected Entities

We have used the Entity Framework in the centre’s projects for more than a year. It fits best for N-tier applications used along with WCF. The architecture is almost the same as it should be for any N-tier application: the client side is WinForms, web pages, or Silverlight; after the client layer we have a WCF service proxy, which decouples the client from the WCF service logic. Of course, I used WCF to make it a multi-tier/enterprise application, remotely accessible to provide services to the client tier. The service layer uses the business layer, which handles data access against the database. The data access layer is EF4, and its entities are translated into serializable business objects that are passed through all the layers to the client as DTOs.



I sometimes wonder whether there is any need to provide an abstraction over EF4 at all, as it already operates at such a high level. You can see the code I have written against EF4 using VS2010; at the business layer, notice how simple the code is.

a) Create a new Category and an FK association to an existing Company by setting the FK property directly:
using (var context = new EzDoxEntities())
{
    // Create a category and a relationship to a known company by Id
    Category c = new Category
    {
        Id = 1,
        Name = "Beverages",
        CompanyId = 7
    };
    // Add the category (the relationship is created by the FK value)
    context.Categories.AddObject(c);
    context.SaveChanges();
}


b) Create a new Category and an FK association to an existing Company by setting the reference instead:

public void CreateCategory()
{
    using (var context = new EzDoxEntities())
    {
        // Create a new category and relate it to an existing company
        // by setting the navigation property instead of the FK value.
        Category c = new Category
        {
            Id = 1,
            Name = "Beverages",
            Company = context.Companies.Single(co => co.Id == 7)
        };
        // Note: no need to add the category to the context, because relating
        // it to an existing company does that automatically.
        // Also notice the use of the Single() query operator -
        // this is new to EF4 too.
        context.SaveChanges();
    }
}
c) Update an existing Category without informing the Entity Framework about the original values, using only the entity key (this is the concept of a stub entity):


public void UpdateCategory(Category editedCategory)
{
    using (var context = new EzDoxEntities())
    {
        // Create a stand-in for the original entity by using just
        // the Id of editedCategory, i.e. create a stub entity and attach it.
        context.Categories.Attach(
            new Category { Id = editedCategory.Id });

        // Now apply the new values, including the CompanyId
        context.Categories.ApplyCurrentValues(editedCategory);
        context.SaveChanges();
    }
}

In this example, "editedCategory" is a category that has been edited somewhere. This is exactly the sort of code you might write in the Update method of an ASP.NET MVC controller, and it is a great improvement over the code you had to write using Independent Associations in EF1.

The Attach statement in the code snippet above [new Category { Id = editedCategory.Id }] creates a “stub entity”. A stub entity is a partially populated entity that stands in for the real thing.

d) Deletion is just as simple using a stub entity:


public void DeleteCategory(long categoryId)
{
        using (var context = new EzDoxEntities())
        {
            // Stub entity 
            var category = new DAL.Category { Id = categoryId };
            //Attach the new entity
            context.Categories.Attach(category);
            //Delete the entity now
            context.DeleteObject(category);
            context.SaveChanges();
        }
}
We remember the days when we used to implement the database layer using ADO.NET and then either build objects from the data or use typed datasets instead. We were comfortable working with those approaches at the time, but the ease of use and maturity of EF4 has changed that perception altogether. There is now very little use of stored procedures in the data model, and the major logic is based on integrated queries, which are simple and quick to maintain. There is a slight performance degradation using EF4 compared to regular ADO.NET data readers, but that depends on the situation, and I see a lot of abstraction and simplicity in the code and its maintenance when using EF4.
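
As a small illustration of the kind of integrated query referred to above, here is a sketch that reuses the hypothetical EzDoxEntities context and Category entity from the earlier examples:

using (var context = new EzDoxEntities())
{
    // All categories belonging to company 7, ordered by name -
    // composed in C# and translated to SQL by LINQ to Entities.
    var categories = context.Categories
                            .Where(c => c.CompanyId == 7)
                            .OrderBy(c => c.Name)
                            .ToList();
}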



Monday, February 1, 2010

Using Metadata to query Tables and Columns views in SQL Server

The Information Schema views are part of the SQL-92 standard. There are views like TABLES and COLUMNS that provide information about the tables and columns in a database.

You can query these views to get metadata for all tables and columns in the database.
Just open a SQL query window against the database and execute the following query:

Select  TABLE_CATALOG as [DB],
        TABLE_NAME as [Table],
        COLUMN_NAME as [Column],
        DATA_TYPE as [Type],
        CHARACTER_MAXIMUM_LENGTH as [Length]
from information_schema.columns
where TABLE_NAME IN
(
    select  TABLE_NAME

    from information_schema.tables
    where table_type = 'base table'
)
order by TABLE_NAME

You will see the following rows in the result set:

Thursday, January 14, 2010

Develop Watchdog Timer

Summary...
The project is critical in nature, and I have a core P2P service (a Windows service written in VC++) running at the back end. The requirement is that the service should be looked after by a separate watchdog application (a dog/timer used for guarding) so that the original service is always running in its normal mode. The watchdog restarts the service on the occurrence of a critical fault, a hang, or when the service becomes meaningless by neglecting the functionality it is supposed to perform in normal operation.

The watchdog is another service with a short-interval timer which ensures the following (a minimal sketch of such a loop appears after the list):


  1. Is the P2P service running? If not, start it.
  2. Is the P2P service hanging? If yes, kill it and restart it. (This is decided if the P2P service has not consumed any CPU since the previous check for the last X checks, and/or its CPU usage has been constantly above 80% for X checks.)
  3. Has the P2P service already been running for X days? If yes, kill it and restart it.
  4. Has the local IP address been changed by the user? If yes, kill it and restart it (so that old IP addresses are removed from the tracker and fresh information is exchanged, allowing other peers to make connections).
  5. Has a new version of the UI application and/or the P2P service been downloaded during the session? If yes, kill it, install the new version, and then restart it.
  6. Is the P2P service neglecting its normal operations/threads? If yes, kill it and restart it. (This is decided if the P2P service has not sent its heartbeat signals to the watchdog application within a certain timeout, a certain number of times.)
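
The real implementation is a VC++ service, but here is a minimal C# sketch of the kind of check-and-restart loop behind points 1 and 3, assuming the service is installed as 'P2PService' and that its process name matches the service name (both names are hypothetical):

using System;
using System.Diagnostics;
using System.ServiceProcess;   // reference System.ServiceProcess.dll
using System.Threading;

class WatchdogSketch
{
    const string ServiceName = "P2PService";              // hypothetical service name
    static readonly TimeSpan MaxUptime = TimeSpan.FromDays(3);

    static void Main()
    {
        while (true)
        {
            CheckService();
            Thread.Sleep(TimeSpan.FromSeconds(30));       // the small-interval timer
        }
    }

    static void CheckService()
    {
        using (var sc = new ServiceController(ServiceName))
        {
            // Check 1: is the service running? If not, start it.
            if (sc.Status != ServiceControllerStatus.Running)
            {
                sc.Start();
                sc.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromMinutes(1));
                return;
            }

            // Check 3: has it been running for more than X days? If so, restart it.
            // (Assumes the process name matches the service name.)
            foreach (Process p in Process.GetProcessesByName(ServiceName))
            {
                if (DateTime.Now - p.StartTime > MaxUptime)
                {
                    sc.Stop();
                    sc.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromMinutes(1));
                    sc.Start();
                    break;
                }
            }
        }
    }
}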

The Watchdog Service should maintain a log file of its actions (checking, restarting, killing) and extra information at the point of action, if any.

It should also copy the log file of the core P2P service to another specified location (same name, with date and time appended), so that the information in it is archived.

Tuesday, January 5, 2010

Base64 Encoding


When text files are attached to SMTP emails, they can be attached in their plain-text form, but binary files cannot be attached without some form of encoding. The encoding used in SMTP and many other Internet protocols is called Base64 encoding, or simply MIME Base64.
In VC++, binary data is stored in BYTE (unsigned char) values, effectively a Base256 (0 … 255) encoding, which is not easily readable or printable. Base64 encoding uses a character set common to most environments and easy to read and print.
Base64 uses 64 (0 … 63) characters, where each character encodes 6 bits. The character set is as follows:
‘A’…‘Z’ for 0-25, ‘a’…‘z’ for 26-51, ‘0’…‘9’ for 52-61, ‘+’ for 62 and ‘/’ for 63, making 64 in total. The ‘=’ character is additionally used for padding.
BYTE (unsigned char) Array vs. Base64 Encoded String
“I have a buffer as a BYTE array. I want to convert it into human-readable/printable form. I need a data representation protocol, and that is Base64 encoding. You can store this data in text, XML, or application configuration files.”
One character in C++ is a minimum storage unit of 8 bits, whereas a Base64-encoded ASCII character represents 6 bits of Base2 (binary). The least common multiple of 8 and 6 is 24, which means:
3 bytes = 4 Base64-encoded characters = 24 bits in Base2
So the encoded value of SoS is U29T. Encoded in ASCII, S, o, S are stored as the bytes 83, 111, 83, which are 01010011, 01101111 and 01010011 in Base2. These three bytes are joined together in a 24-bit (24 = 8×3) buffer, producing 010100110110111101010011. This is split into packs of 6 bits, giving four numbers (24 = 6×4) that are then mapped to their corresponding Base64 characters.
Text                      S          o          S
ASCII                     83         111        83
Base2 pattern (24 bits)   01010011   01101111   01010011
6-bit pattern             010100     110110     111101     010011
Index                     20         54         61         19
Base64-encoded ASCII      U          2          9          T

The example above shows that Base64 encoding converts 3 unencoded bytes (plain ASCII characters) into 4 encoded ASCII characters, so a Base64-encoded string is roughly 4/3 (≈1.33) times larger than the corresponding plain ASCII string. The base64 encoding algorithm takes every three bytes of data and converts them into four bytes of printable encoded ASCII characters. If the size of the incoming byte array is not an exact multiple of three, the algorithm appends equal signs (one for each missing byte) at the end of the base64-encoded string. So there can be 0, 1 or 2 ‘=’ signs at the end, depending upon the number of missing bytes in the last three-byte group of the original string. This convention guarantees that the size of the base64-encoded string will always be a multiple of four. The length of a base64-encoded string can be calculated as:
Base64 = ((Bytes + 2) / 3) x 4      (using integer division)

where Base64 and Bytes indicate the number of bytes in the base64-encoded string and the original byte array respectively. You can use this formula to calculate the size of the column holding base64-encoded text.
For example, if a byte array contains 13 characters of the ASCII string "Hello, world!", the size of the corresponding base64-encoded string can be calculated as:

Base64 = ((13 + 2) / 3) x 4 = 5 x 4 = 20 (bytes)

The resulting value will be "SGVsbG8sIHdvcmxkIQ==". The last two characters of the base64-encoded string contain two equal signs ("==") indicating the two missing bytes in the last three-byte block of the byte array.
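
In .NET the framework does this for you. Here is a small C# sketch, using the same example string, that checks both the encoded value and the size formula:

using System;
using System.Text;

class Base64Demo
{
    static void Main()
    {
        byte[] bytes = Encoding.ASCII.GetBytes("Hello, world!");    // 13 bytes
        string encoded = Convert.ToBase64String(bytes);

        Console.WriteLine(encoded);                                 // SGVsbG8sIHdvcmxkIQ==
        Console.WriteLine(encoded.Length);                          // 20

        // Size formula: ((Bytes + 2) / 3) x 4, using integer division
        Console.WriteLine(((bytes.Length + 2) / 3) * 4);            // 20
    }
}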
 
Padding
In the first case, where you have one byte remaining, you pad two additional all-zero bytes onto the end of the binary sequence. You can then represent that one byte with two base-64 characters followed by two padding characters.
Let's consider an example.
‘00000001’
Pad the single-byte instance with two more bytes of zeros.
‘00000001’ ‘00000000’ ‘00000000’
Now break up the binary sequence into sets of six bits.
‘000000’ ‘010000’ ‘000000’ ‘000000’
Take the first two base-64 characters and pad two ‘=’ characters to the end of the sequence.
‘AQ==’
The second case is where you have two bytes remaining.
‘00000010’ ‘00000001’
Here you should pad one additional zero byte to the end of the binary sequence.
‘00000010’ ‘00000001’ ‘00000000’
Now break up the binary sequence into sets of six bits.
‘000000’ ‘100000’ ‘000100’ ‘000000’
We then take three base-64 characters and pad with one ‘=’ sign.
‘AgE=’
Line Length
To improve human readability of the stream, the MIME base64 specification requires that each line be at most 76 encoded base-64 characters long. After every 76 characters, a carriage return and line feed (\r\n) is inserted into the stream. This increases the stream length by approximately 3%.
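
In C#, Convert.ToBase64String can insert these breaks for you via Base64FormattingOptions.InsertLineBreaks. A minimal sketch (the buffer contents are arbitrary):

using System;

class LineLengthDemo
{
    static void Main()
    {
        byte[] buffer = new byte[200];      // any binary payload
        new Random(1).NextBytes(buffer);

        // InsertLineBreaks adds a CRLF after every 76 encoded characters,
        // matching the MIME line-length rule described above.
        string mime = Convert.ToBase64String(buffer, Base64FormattingOptions.InsertLineBreaks);
        Console.WriteLine(mime);
    }
}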
 
References:
Base64 on Wikipedia
 
How to Base64 by Randy Charles Morin
 
How to Calculate the Size of Encrypted Data?