git merge vs rebase

Rebasing and merging are both designed to integrate changes from one branch into another branch but in different ways.

If you have a feature branch:

REBASE: To rebase the feature branch onto the master branch, you move the feature branch’s base to master’s tip. If there are conflicts, they are presented per commit, so you may end up resolving conflicts multiple times. Rebase completely rewrites the history.

Before the rebase, feature and master have diverged (diagram: before git rebase).

After the rebase, the feature commits sit on top of master’s tip (diagram: after git rebase).
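The rebase flow can be sketched in a throwaway repo (the branch names, file names and commit messages below are made up for illustration; `git init --initial-branch` needs git 2.28+):

```shell
set -e
# throwaway repo with a diverged master and feature branch
tmp=$(mktemp -d); cd "$tmp"
git init -q --initial-branch=master
git config user.email "dev@example.com"; git config user.name "dev"

echo base > app.txt;     git add .; git commit -qm "m1"
git checkout -q -b feature
echo feature > feat.txt; git add .; git commit -qm "f1"
git checkout -q master
echo more > main.txt;    git add .; git commit -qm "m2"   # branches have now diverged

# move feature's base to master's tip; conflicts (if any) are resolved per replayed commit
git checkout -q feature
git rebase -q master
git log --format=%s        # prints: f1 m2 m1 -- a linear, rewritten history
```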

MERGE: To merge the feature branch into the master branch, you take the content of the feature commits and integrate them into master. The merge creates a single new commit, in which you resolve all conflicts and write a commit message. Merge preserves the history of both branches.

Before the merge, feature and master have diverged (diagram: before git merge).

After the merge, a merge commit joins the two branches (diagram: after git merge).
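Similarly, the merge flow can be sketched in a throwaway repo (names are made up; `git init --initial-branch` needs git 2.28+):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q --initial-branch=master
git config user.email "dev@example.com"; git config user.name "dev"

echo base > app.txt;     git add .; git commit -qm "m1"
git checkout -q -b feature
echo feature > feat.txt; git add .; git commit -qm "f1"
git checkout -q master
echo more > main.txt;    git add .; git commit -qm "m2"

# integrate feature into master: one merge commit, all conflicts resolved in one go
git merge --no-edit feature
git log --oneline --graph   # both branches' commits are preserved, joined by a merge commit
```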

If you work in a large development team, I recommend using merge, as it preserves the history and everyone is aware of what has happened in the past. If you have a fairly complicated history and want to squash the commits, then rebase is a good option.

If you are new to git, this book will give you useful tips and techniques to work better with your development team.

I also design developer memes on T-shirts & stickers.



If you have the same taste in dev humour and enjoy the designs, please support me at Redbubble =)

Setup a private nuget server

Deploy nuget server project

Nuget Server package on

The project source:

Deploy the project to IIS, then configure the web.config:

1. Set the apiKey used for pushing packages to the server.
2. Set packagesPath to the folder that stores all the packages; the default is ~/Packages (you need to give the app pool user write permission).
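In web.config these two settings live under appSettings. A minimal sketch, assuming the default NuGet.Server key names (the values here are placeholders):

```xml
<appSettings>
  <!-- clients must supply this key when pushing packages -->
  <add key="apiKey" value="{your-secret-key}" />
  <!-- where pushed .nupkg files are stored; the app pool user needs write permission -->
  <add key="packagesPath" value="~/Packages" />
</appSettings>
```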

Pushing packages

1. Using the nuget CLI

nuget.exe push {your_package}.nupkg -Source {NuGet package source URL} -ApiKey {apiKey}

2. Using the dotnet core CLI

dotnet pack --configuration release
dotnet nuget push foo.nupkg -k 4003d786-cc37-4004-bfdf-c4f3e8ef9b3a -s http://customsource/

Enable authentication for accessing nuget server

1. Enable Windows authentication on the server site in IIS.
2. Create a Windows user.
3. Add the repository source with username and password (using the nuget CLI); it will be saved into the global nuget.config file (normally located at \Users\%AppUser%\AppData\Roaming\NuGet).
nuget.exe sources add -name {feed name} -source {feed URL} -username {username} -password {PAT} -StorePasswordInClearText

If you don’t supply the username & password, the server returns a 401 Unauthorized error. In Visual Studio, a dialog will prompt for credentials.

Restore nuget packages using nuget config per solution

If you work on a new machine and check out the source code of a project, you will need to configure the nuget source, username, password, etc. To let developers restore the packages and build the project without any hassle, we can create a nuget.config per solution.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="AWS Nuget" value="" />
  </packageSources>
  <packageSourceCredentials>
    <!-- the element name is the source name, with the space XML-encoded -->
    <AWS_x0020_Nuget>
      <add key="Username" value="spnugetuser" />
      <add key="ClearTextPassword" value="SearchParty2017" />
    </AWS_x0020_Nuget>
  </packageSourceCredentials>
</configuration>


Then you can call

nuget restore

or

dotnet restore

Cassandra tips

Use short column names

Column names take space in each cell, and if you use a big clustering key, it will be copied all over your clustered cells.

Eventually, we have found in some situations that column names (including clustering keys) take up more space than the data we wanted to store! So it is good advice to use short column names and short clustering keys.

You can write data in the future

Using the CQL driver you can explicitly set up the timestamp of each of your key/value pairs. One nice trick is to set up this timestamp in the future: that will make this data immutable until the date is reached.
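As a CQL sketch (the table, columns and values are made up; USING TIMESTAMP takes microseconds since the epoch):

```sql
-- write the cell with a client-supplied timestamp set in the future;
-- until that date, regular writes carry smaller timestamps and cannot overwrite it
INSERT INTO users (id, name) VALUES (42, 'alice')
USING TIMESTAMP 1893456000000000;  -- 2030-01-01 UTC, in microseconds
```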

Don’t use TimeUUID with a specific date

TimeUUID is a very common type for Cassandra column names, in particular when using wide rows. If you create a TimeUUID for the current time, this is no problem: your data will be stored chronologically, and your keys will be unique. However, if you force the date, then the underlying algorithm will not create a unique ID! Isn’t this surprising, for a “UUID” (Universal Unique Identifier) field?

As a result, only use TimeUUID if:

  • You use them at the current date
  • You force the date, but are OK with losing other data stored at the same date!

Don’t use PreparedStatement if you insert empty columns

If you have an empty column in your PreparedStatement, the CQL driver will in fact insert a null value in Cassandra, which will end up being a tombstone.

This is a very bad behavior, as:

  • Those tombstones of course take up valuable resources.
  • As a result, you can easily reach the tombstone_failure_threshold (by default at 100,000 which is in fact quite a high value).

The only solution is to have one PreparedStatement per type of insert query, which can be annoying if you have a lot of empty columns! But if you have multiple empty columns, shouldn’t you have used a Map to store that data in the first place?

Don’t use Cassandra as a queue

Using Cassandra as a queue looks like a good idea, as wide rows definitely look like queues. There are even several projects using Cassandra as a persistence layer for ActiveMQ, so this should be a good idea!

This is in fact the same problem as the previous point: when you delete data, Cassandra will create tombstones, and that will be bad for performance. Imagine you write and delete 10,000 rows, and then write 1 more row: in order to fetch that one row, Cassandra will in fact process the whole 10,001 rows…

Use the row cache wisely

By default Cassandra uses a key cache, but whole rows can also be cached. We find this rather under-used, and we have had excellent results when storing reference data (such as countries, user profiles, etc) in memory.

However, be careful of two pitfalls:

  • The row cache in fact stores a whole partition in cache (it works at the partition key level, not at the clustering key level), so putting a wide row into the row cache is a very bad idea!
  • If you put the row cache off-heap, it will be outside the JVM, so Cassandra will need to deserialize it first, which will be a performance hit.

Don’t use “select … in” queries

If you do a “select … in” on 20 keys, you will hit one coordinator node that will need to get all the required data, which can be distributed all over your cluster: it might need to reach 20 different nodes, and then it will need to gather all that data, which will put quite a lot of pressure on this coordinator node.

As the latest CQL driver can be configured to be token aware, you can use this feature to do 20 token aware, asynchronous queries. As each of those queries will directly hit the correct node storing the requested data, this will probably be more performant than doing a “select … in”, as you will gain the round trip to the coordinator node.

Configure the retry policy when several nodes fail

This of course depends whether you prefer to have high consistency or high availability: as always, the good thing with Cassandra is that this is tunable!

If you want to have good consistency, you have probably configured your queries to use a quorum (or a local_quorum if you have multiple datacenters), but what happens if you lose 2 nodes, considering you have the usual replication factor of 3? You didn’t lose any data, but as you lost the quorum for some data, you will start to get failed queries! A good compromise is to tune the retry policy and use the DowngradingConsistencyRetryPolicy: this will allow you to lower your consistency level temporarily, giving you time to restore one of the failed nodes and get your quorum back.

Don’t forget to repair

The repair operation is very important in Cassandra, as this is what guarantees that you won’t have forgotten deletes. For example, this can happen when you had a hardware failure, and you bring the node back when some tombstones have expired on other nodes: Cassandra will see this deleted data as some new data (as tombstones have disappeared), and thus this data will be “resurrected” in your cluster.

Repairing nodes should be a regular and normal operation on your cluster, but as this has to be set up manually, we see many clusters where this is not done properly.

For your convenience, DataStax Enterprise, the commercial version of Cassandra, provides a “repair service” with OpsCenter, that does this job automatically.

Clean up your snapshots

Taking a snapshot is cheap with Cassandra, and can often save you after doing a wrong operation. For instance, a database snapshot is automatically created when you do a truncate, and this has already been useful to us on a production system!

However, snapshots take space, and as your stored data grows, you will need that space sooner or later: so a good process is to save those snapshots outside of your cluster (for example, by uploading them to Amazon S3), and then clean them up to reclaim the disk space.


Vagrant provides easy to configure, reproducible, and portable work environments built on top of industry-standard technology and controlled by a single consistent workflow to help maximize the productivity and flexibility of you and your team.

Vagrant stands on the shoulders of giants. Machines are provisioned on top of VirtualBox, VMware, AWS, etc.

I am using VirtualBox as an example; you can fire up an Ubuntu box with a few lines of configuration.

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"

  config.vm.synced_folder "./data", "/home/vagrant/data"
  config.vm.provision "shell", path: "./scripts/vagrant/" "forwarded_port", guest: 8983, host: 8984, auto_correct: true "private_network", ip: ""

  config.vm.provider "virtualbox" do |vb|
    # Customize the amount of memory on the VM:
    vb.memory = "8024"
  end
end
You can forward a port from the virtual box to your host; if the port is already used by another program, Vagrant can auto-correct it and assign a new one. "forwarded_port", guest: 8983, host: 8984, auto_correct: true

You can set up a virtual IP for the box. "private_network", ip: ""

Set up a synced folder that can be accessed both inside the ssh session and from your host machine.

config.vm.synced_folder "./data", "/home/vagrant/data"

After you have created the Vagrantfile, you can call

vagrant up

to fire up the box.

Once the Vagrantfile has been changed, you need to call

vagrant reload

to refresh the virtual box.

To destroy a virtual box, call

vagrant destroy

To access public built vagrant boxes,

SelectListItem helper to create SelectListItems from an enum

First, we create an extension method to get the descriptions of enum values.

public static string ToDescription<T>(this T enumValue)
    where T : struct, IConvertible, IComparable, IFormattable // criteria for enums
{
    var fieldInfo = enumValue.GetType().GetField(enumValue.ToString());
    var attributes = fieldInfo.GetCustomAttributes(typeof(DescriptionAttribute), false)
        .Cast<DescriptionAttribute>().ToList();
    // fall back to ToLabel(), another extension method from the same codebase
    return attributes.Any() ? attributes.First().Description : enumValue.ToLabel();
}

Second, create the SelectListItems from the enum.

public static List<SelectListItem> GetItemsForEnum<TEnum>(int? selectedValue = null, string defaultText = "")
    where TEnum : struct, IConvertible, IComparable, IFormattable // criteria for enums
{
    var results = new List<SelectListItem>();
    var values = Enum.GetValues(typeof(TEnum));
    if (!string.IsNullOrEmpty(defaultText))
        results.Add(new SelectListItem { Text = defaultText, Value = "" });
    foreach (var value in values)
    {
        var name = ((TEnum)value).ToDescription();
        var selected = (selectedValue != null) && selectedValue.Equals((int)value);
        results.Add(new SelectListItem { Text = name, Value = ((int)value).ToString(), Selected = selected });
    }
    return results;
}

Override EF 5 database mapping

public class UserRepo
{
    private UserContext _context;

    public UserRepo(UserContext context)
    {
        _context = context;
    }

    public User Save(User user)
    {
        if (user.Id <= 0)
            _context.Users.Add(user); // insert when the user is new
        return user;
    }
}

public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class UserMapping : EntityTypeConfiguration<User>
{
    public UserMapping()
    {
        HasKey(p => p.Id);
        Property(p => p.Id).HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity).HasColumnName("Id");
        Property(p => p.Name).HasMaxLength(100);
    }
}

public class UserContext : DbContext
{
    public DbSet<User> Users { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Configurations.Add(new UserMapping());
    }

    public UserRepo UserRepo
    {
        get { return new UserRepo(this); }
    }
}

SQL script to view sizes of all tables

SELECT
    t.NAME AS TableName,
    p.rows AS RowCounts,
    SUM(a.total_pages) * 8 AS TotalSpaceKB,
    SUM(a.used_pages) * 8 AS UsedSpaceKB,
    (SUM(a.total_pages) - SUM(a.used_pages)) * 8 AS UnusedSpaceKB
FROM
    sys.tables t
INNER JOIN
    sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN
    sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN
    sys.allocation_units a ON p.partition_id = a.container_id
WHERE
    t.NAME NOT LIKE 'dt%'
    AND t.is_ms_shipped = 0
    AND i.OBJECT_ID > 255
GROUP BY
    t.Name, p.Rows

Conditional required validation or field mandatory depends on another field MVC 4

I have experienced the situation where I need to make a field mandatory if the user has entered a value in another field (or a particular value in that field).

Here is my example: I have two radio buttons that ask “Do you have the purchase receipt?” with options “yes” or “no”. If the user selects “yes”, I need them to specify the date of the purchase as well.


Now the headache is that I can’t simply make “Purchase Date” a required field, because if the user selects “no”, they don’t need to enter the purchase date. After some research on the internet, I found this solution on StackOverflow. It had a few bugs, which I fixed and have shared here on my blog.

1. I created a RequiredIfAttribute:

public class RequiredIfAttribute : ValidationAttribute, IClientValidatable
{
    protected RequiredAttribute _innerAttribute;

    public string DependentProperty { get; set; }
    public object TargetValue { get; set; }

    public bool AllowEmptyStrings
    {
        get { return _innerAttribute.AllowEmptyStrings; }
        set { _innerAttribute.AllowEmptyStrings = value; }
    }

    public RequiredIfAttribute(string dependentProperty, object targetValue)
    {
        _innerAttribute = new RequiredAttribute();
        DependentProperty = dependentProperty;
        TargetValue = targetValue;
    }

    protected override ValidationResult IsValid(object value, ValidationContext validationContext)
    {
        // get a reference to the property this validation depends upon
        var containerType = validationContext.ObjectInstance.GetType();
        var field = containerType.GetProperty(DependentProperty);

        if (field != null)
        {
            // get the value of the dependent property
            var dependentValue = field.GetValue(validationContext.ObjectInstance, null);
            // trim spaces of dependent value
            if (dependentValue != null && dependentValue is string)
            {
                dependentValue = (dependentValue as string).Trim();

                if (!AllowEmptyStrings && (dependentValue as string).Length == 0)
                {
                    dependentValue = null;
                }
            }

            // compare the value against the target value
            if ((dependentValue == null && TargetValue == null) ||
                (dependentValue != null && (TargetValue.Equals("*") || dependentValue.Equals(TargetValue))))
            {
                // match => means we should try validating this field
                if (!_innerAttribute.IsValid(value))
                    // validation failed - return an error
                    return new ValidationResult(FormatErrorMessage(validationContext.DisplayName), new[] { validationContext.MemberName });
            }
        }

        return ValidationResult.Success;
    }

    public virtual IEnumerable<ModelClientValidationRule> GetClientValidationRules(ModelMetadata metadata, ControllerContext context)
    {
        var rule = new ModelClientValidationRule
        {
            ErrorMessage = FormatErrorMessage(metadata.GetDisplayName()),
            ValidationType = "requiredif",
        };

        string depProp = BuildDependentPropertyId(metadata, context as ViewContext);

        // find the value on the control we depend on;
        // if it's a bool, format it javascript style
        // (the default is True or False!)
        string targetValue = (TargetValue ?? "").ToString();
        if (TargetValue is bool)
            targetValue = targetValue.ToLower();

        rule.ValidationParameters.Add("dependentproperty", depProp);
        rule.ValidationParameters.Add("targetvalue", targetValue);

        yield return rule;
    }

    private string BuildDependentPropertyId(ModelMetadata metadata, ViewContext viewContext)
    {
        // build the ID of the property
        string depProp = viewContext.ViewData.TemplateInfo.GetFullHtmlFieldId(DependentProperty);
        // unfortunately this will have the name of the current field appended to the beginning,
        // because the TemplateInfo's context has had this fieldname appended to it. Instead, we
        // want to get the context as though it was one level higher (i.e. outside the current property,
        // which is the containing object, and hence the same level as the dependent property).
        var thisField = metadata.PropertyName + "_";
        if (depProp.StartsWith(thisField))
            // strip it off again
            depProp = depProp.Substring(thisField.Length);
        return depProp;
    }
}

2. Create the js validation method and the js unobtrusive adapter (I put them in the document.ready() callback):

$.validator.addMethod('requiredif',
    function (value, element, parameters) {
        var id = '#' + parameters['dependentproperty'];

        // get the target value (as a string,
        // as that's what actual value will be)
        var targetvalue = parameters['targetvalue'];
        targetvalue = (targetvalue == null ? '' : targetvalue).toString();

        // get the actual value of the target control
        // note - this probably needs to cater for more
        // control types, e.g. radios
        var control = $(id);
        var controltype = control.attr('type');
        var actualvalue =
            (controltype === 'checkbox' || controltype === 'radio') ?
            control.attr('checked').toString() :
            control.val();

        // if the condition is true, reuse the existing
        // required field validator functionality
        if ($.trim(targetvalue) === $.trim(actualvalue) || ($.trim(targetvalue) === '*' && $.trim(actualvalue) !== ''))
            return $.validator.methods.required.call(
                this, value, element, parameters);

        return true;
    });

$.validator.unobtrusive.adapters.add(
    'requiredif',
    ['dependentproperty', 'targetvalue'],
    function (options) {
        options.rules['requiredif'] = {
            dependentproperty: options.params['dependentproperty'],
            targetvalue: options.params['targetvalue']
        };
        options.messages['requiredif'] = options.message;
    });

3. For the Model,

        public bool HasReceipt { get; set; }

         [RequiredIf("HasReceipt", true, ErrorMessage = "You must enter purchase date")]
        [Display(Name="Purchase Date")]
        public DateTime? PurchaseDate { get; set; }

4. When referencing this validation js, note that it only works if it is included before the unobtrusive validation script:

    <script src="~/Scripts/jquery.unobtrusive-ajax.min.js"></script>
    <script src="~/Scripts/jquery.validate.min.js"></script>
    <script src="~/Scripts/jquery.validate.requiredif.js"></script>
    <script src="~/Scripts/jquery.validate.unobtrusive.min.js"></script>

5. Now you have your conditional required validations.

There are many scenarios like this,


If you have entered an Address field, you must enter Suburb, City.

If you have selected yes for a credit card, you must enter the credit card digits.

If you have subscribed to a service, you must enter a valid email.

And many more.

Google +1, share, javascript callback

For the Google plus button, you can set a callback attribute pointing to a js function:

<g:plusone href="" callback="plusClick"></g:plusone>

In the callback js function, you can check the state, if they have clicked +1 or removed +1.

function plusClick(data) {
    if (data.state == "on") {
        // +1
    } else if (data.state == "off") {
        // -1 (user took their +1 away)
    }
}

(function () {
    var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true;
    po.src = '';
    var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s);
})();

There are some other js callbacks as well, such as onstartinteraction (when the +1 dialog pops up) and onendinteraction (when the +1 dialog closes).

For more information, check out this Google page: