
Recognizing handwritten digits - a quick peek into the basics of machine learning

In the previous two posts on machine learning, I presented a very basic introduction to an approach called "probabilistic graphical models". In this post I'd like to take a tour of some different techniques while creating code that will recognize handwritten digits.

Handwritten digit recognition is an interesting topic that has been explored for many years. It is now considered one of the best ways to begin the journey into the world of machine learning.

Taking the Kaggle challenge

We'll take on the "digits recognition" challenge as presented on Kaggle, an online platform hosting challenges for data scientists. Most of the challenges offer prizes in real money. Some are there to help us along in our journey of learning data science techniques - the "digits recognition" contest is one of these.

The challenge

As explained on Kaggle:

MNIST ("Modified National Institute of Standards and Technology") is the de facto “hello world” dataset of computer vision.

The "digits recognition" challenge is one of the best ways to get acquainted with machine learning and computer vision. The so-called "MNIST" dataset consists of 70k images of handwritten digits - each one grayscaled and of a 28x28 size. The Kaggle challenge is about taking a subset of 42k of them along with labels (what actual number does the image show) and "training" the computer on that set. The next step is to take the rest 28k of images without the labels and "predict" which actual number they present.

Here's a short overview of what the digits in the set look like (along with the numbers they represent):


I have to admit that for some of them I have a really hard time recognizing the actual numbers on my own :)

The general approach to supervised learning

Learning from labelled data is what is called "supervised learning". It's supervised because we're taking the computer by the hand through the whole training data set and "teaching" it what the data linked with different labels looks like.

In all such scenarios we can express the data and labels as:
Y ~ X1, X2, X3, X4, ..., Xn
The Y is called the dependent variable while each Xn is an independent variable. This formula holds for classification problems as well as for regression.

Classification is when the dependent variable Y is so-called categorical - taking values from a concrete set without a meaningful order. Regression is when Y is not categorical - most often continuous.

In the digits recognition challenge we're faced with a classification task. The dependent variable takes values from the set:
Y = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 }
I'm sure the question you might be asking yourself now is: what are the independent variables Xn? That turns out to be the crux of the whole problem :)

The plan of attack

A good introduction to computer vision techniques is J. R. Parker's book "Algorithms for Image Processing and Computer Vision". I encourage the reader to buy it. I took some ideas from it while having fun with my own solution to the challenge.

The book outlines ideas revolving around computing image profiles - one for each side. For each row of pixels, we compute the distance of the first lit pixel from the edge. This gives us our first independent variables. To capture even more information about digit shapes, we'll also record the differences between consecutive row values, along with their global maxima and minima. We'll also compute the width of the shape in each row.

Because handwritten digits vary greatly in thickness, we will first preprocess the images to detect the so-called skeleton of each digit. The skeleton is an image representation in which the thickness of the shape has been reduced to a single pixel.

Having the image thinned will also allow us to capture some more information about the shapes. We will write an algorithm that walks the skeleton and records the frequencies of direction changes.

Once we have our set of independent variables Xn, we'll use a classification algorithm to first learn in a supervised way (using the provided labels) and then to predict the values for the test data set. Lastly we'll submit our predictions to Kaggle and see how well we did.

Having fun with languages

In the data science world, the lingua franca remains the R programming language. In recent years Python has come close in popularity, and nowadays we can say the duo of R and Python rules the data science world (not counting high-performance code written e.g. in C++ for production systems).

Lately a new language designed with data scientists in mind has emerged - Julia. It has characteristics of both dynamically typed scripting languages and strictly typed compiled ones. It compiles code into efficient native binaries via LLVM - but in a JIT fashion, inferring types on the go as needed.

While having fun with the Kaggle challenge I'll use Julia and Python for the so-called feature extraction phase (the one in which we compute our Xn variables). I'll then turn to R for the classification itself. Note that I could use any of these languages at each step and get very similar results. The purpose of this series of articles is to be a fun, bird's-eye overview, so I decided this way would be much more interesting.

Feature Extraction

The end result of this phase is a data frame saved as a CSV file, so that we'll be able to load it in R and do the classification there.

First let's define the general function in Julia that takes the name of the input CSV file and returns a data frame with features of given images extracted into columns:
using DataFrames
using ProgressMeter # provides the @showprogress macro used below

function get_data(name :: String, include_label = true)
  println("Loading CSV file into a data frame...")
  table = readtable(string(name, ".csv"))
  extract(table, include_label)
end
Now the extract function looks like the following:
"""
Extracts the features from the dataframe. Puts them into
separate columns and removes all other columns except the
labels.

The features:

* Left and right profiles (after fitting into the same sized rect):
  * Min
  * Max
  * Width[y]
  * Diff[y]
* Paths:
  * Frequencies of movement directions
  * Simplified directions:
    * Frequencies of 2 element simplified paths
"""
function extract(frame :: DataFrame, include_label = true)
  println("Reshaping data...")
  
  function to_image(flat :: Array{Float64}) :: Array{Float64}
    dim      = Base.isqrt(length(flat))
    reshape(flat, (dim, dim))'
  end
  
  from = include_label ? 2 : 1
  frame[:pixels] = map((i) -> convert(Array{Float64}, frame[i, from:end]) |> to_image, 1:size(frame, 1))
  images = frame[:, :pixels] ./ 255
  data = Array{Array{Float64}}(length(images))
  
  @showprogress 1 "Computing features..." for i in 1:length(images)
    features = pixels_to_features(images[i])
    data[i] = features_to_row(features)
  end
  start_column = include_label ? [:label] : []
  columns = vcat(start_column, features_columns(images[1]))
  
  result = DataFrame()
  for c in columns
    result[c] = []
  end

  for i in 1:length(data)
    if include_label
      push!(result, vcat(frame[i, :label], data[i]))
    else
      push!(result, vcat([],               data[i]))
    end
  end

  result
end
A few nice things to notice here about Julia itself are:
  • The function documentation is written in Markdown
  • We can nest functions inside other functions
  • The language is dynamically but strongly typed
  • Types can be inferred from the context
  • It is often desirable to provide concrete types to improve performance (but that's an advanced Julia-related topic)
  • Arrays are indexed from 1
  • There's the nice |> operator found e.g. in Elixir (which I absolutely love)
The above code converts the images into arrays of Float64 and scales the values to lie between 0 and 1 (instead of the original 0..255).

A thing to notice is that in Julia we can vectorize operations easily, and we're using this fact to tersely scale our pixel values:
images = frame[:, :pixels] ./ 255
We are referencing the pixels_to_features function which we define as:
"""
Returns an ImageStats struct for the image pixels
given as an argument
"""
function pixels_to_features(image :: Array{Float64})
  dim      = Base.isqrt(length(image))
  skeleton = compute_skeleton(image)
  bounds   = compute_bounds(skeleton)
  resized  = compute_resized(skeleton, bounds, (dim, dim))
  left     = compute_profile(resized, :left)
  right    = compute_profile(resized, :right)
  width_min, width_max, width_at = compute_widths(left, right, image)
  frequencies, simples = compute_transitions(skeleton)

  ImageStats(dim, left, right, width_min, width_max, width_at, frequencies, simples)
end
This in turn uses the ImageStats structure:
immutable ImageStats
  image_dim             :: Int64
  left                  :: ProfileStats
  right                 :: ProfileStats
  width_min             :: Int64
  width_max             :: Int64
  width_at              :: Array{Int64}
  direction_frequencies :: Array{Float64}

  # The following adds information about transitions
  # in 2 element simplified paths:
  simple_direction_frequencies :: Array{Float64}
end

immutable ProfileStats
  min :: Int64
  max :: Int64
  at  :: Array{Int64}
  diff :: Array{Int64}
end
The pixels_to_features function first gets the skeleton of the digit shape as an image and then uses other functions passing that skeleton to them. The function returning the skeleton utilizes the fact that in Julia it's trivially easy to use Python libraries. Here's its definition:
using PyCall

@pyimport skimage.morphology as cv

"""
Thin the number in the image by computing the skeleton
"""
function compute_skeleton(number_image :: Array{Float64}) :: Array{Float64}
  convert(Array{Float64}, cv.skeletonize_3d(number_image))
end
It uses the scikit-image library's skeletonize_3d function via the @pyimport macro, calling it as if it were regular Julia code.
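
Before looking at the cropping code: it constructs a Bounds value whose definition isn't shown in the post. A minimal sketch consistent with how it's used (an assumption on my part, not the original code):
# assumed field order matches the constructor call Bounds(top, right, bottom, left)
immutable Bounds
  top    :: Int64
  right  :: Int64
  bottom :: Int64
  left   :: Int64
end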

Next the code crops the digit itself from the 28x28 image and resizes it back to 28x28 so that the edges of the shape always "touch" the edges of the image. For this we need the function that returns the bounds of the shape so that it's easy to do the cropping:
function compute_bounds(number_image :: Array{Float64}) :: Bounds
  rows = size(number_image, 1)
  cols = size(number_image, 2)

  saw_top = false
  saw_bottom = false

  top = 1
  bottom = rows
  left = cols
  right = 1

  for y = 1:rows
    saw_left = false
    row_sum = 0

    for x = 1:cols
      row_sum += number_image[y, x]

      if !saw_top && number_image[y, x] > 0
        saw_top = true
        top = y
      end

      if !saw_left && number_image[y, x] > 0 && x < left
        saw_left = true
        left = x
      end

      if saw_top && !saw_bottom && x == cols && row_sum == 0
        saw_bottom = true
        bottom = y - 1
      end

      if number_image[y, x] > 0 && x > right
        right = x
      end
    end
  end
  Bounds(top, right, bottom, left)
end
Resizing the image is straightforward:
using Images

function compute_resized(image :: Array{Float64}, bounds :: Bounds, dims :: Tuple{Int64, Int64}) :: Array{Float64}
  cropped = image[bounds.top:bounds.bottom, bounds.left:bounds.right] # rows (y) first, then columns (x)
  imresize(cropped, dims)
end
Next, we need to compute the profile stats as described in our plan of attack:
function compute_profile(image :: Array{Float64}, side :: Symbol) :: ProfileStats
  @assert side == :left || side == :right

  rows = size(image, 1)
  cols = size(image, 2)

  columns = side == :left ? collect(1:cols) : (collect(1:cols) |> reverse)
  at = zeros(Int64, rows)
  diff = zeros(Int64, rows)
  min = rows
  max = 0

  min_val = cols
  max_val = 0

  for y = 1:rows
    for x = columns
      if image[y, x] > 0
        at[y] = side == :left ? x : cols - x + 1

        if at[y] < min_val
          min_val = at[y]
          min = y
        end

        if at[y] > max_val
          max_val = at[y]
          max = y
        end
        break
      end
    end
    if y == 1
      diff[y] = at[y]
    else
      diff[y] = at[y] - at[y - 1]
    end
  end

  ProfileStats(min, max, at, diff)
end
The widths of shapes can be computed with the following:
function compute_widths(left :: ProfileStats, right :: ProfileStats, image :: Array{Float64}) :: Tuple{Int64, Int64, Array{Int64}}
  image_width = size(image, 2)
  min_width = image_width
  max_width = 0
  width_ats = length(left.at) |> zeros

  for row in 1:length(left.at)
    width_ats[row] = image_width - (left.at[row] - 1) - (right.at[row] - 1)

    if width_ats[row] < min_width
      min_width = width_ats[row]
    end

    if width_ats[row] > max_width
      max_width = width_ats[row]
    end
  end

  (min_width, max_width, width_ats)
end
And lastly, the transitions:
# Assumed alias for a point on the image grid - not shown in the original post:
typealias Point Tuple{Int64, Int64}

function compute_transitions(image :: Array{Float64}) :: Tuple{Array{Float64}, Array{Float64}}
  history = zeros((size(image,1), size(image,2)))

  function next_point() :: Nullable{Point}
    point = Nullable()

    for row in 1:size(image, 1) |> reverse
      for col in 1:size(image, 2) |> reverse
        if image[row, col] > 0.0 && history[row, col] == 0.0
          point = Nullable((row, col))
          history[row, col] = 1.0

          return point
        end
      end
    end
    point  # no unvisited point left - return the empty Nullable
  end

  function next_point(point :: Nullable{Point}) :: Tuple{Nullable{Point}, Int64}
    result = Nullable()
    trans = 0

    function direction_to_moves(direction :: Int64) :: Tuple{Int64, Int64}
      # for frequencies:
      # 8 1 2
      # 7 - 3
      # 6 5 4
      [
       ( -1,  0 ),
       ( -1,  1 ),
       (  0,  1 ),
       (  1,  1 ),
       (  1,  0 ),
       (  1, -1 ),
       (  0, -1 ),
       ( -1, -1 ),
      ][direction]
    end

    function peek_point(direction :: Int64) :: Nullable{Point}
      actual_current = get(point)

      row_move, col_move = direction_to_moves(direction)

      new_row = actual_current[1] + row_move
      new_col = actual_current[2] + col_move

      if new_row <= size(image, 1) && new_col <= size(image, 2) &&
         new_row >= 1 && new_col >= 1
        return Nullable((new_row, new_col))
      else
        return Nullable()
      end
    end

    for direction in 1:8
      peeked = peek_point(direction)

      if !isnull(peeked)
        actual = get(peeked)
        if image[actual[1], actual[2]] > 0.0 && history[actual[1], actual[2]] == 0.0
          result = peeked
          history[actual[1], actual[2]] = 1
          trans = direction
          break
        end
      end
    end

    ( result, trans )
  end

  function trans_to_simples(transition :: Int64) :: Array{Int64}
    # for frequencies:
    # 8 1 2
    # 7 - 3
    # 6 5 4

    # for simples:
    # - 1 -
    # 4 - 2
    # - 3 -
    [
      [ 1 ],
      [ 1, 2 ],
      [ 2 ],
      [ 2, 3 ],
      [ 3 ],
      [ 3, 4 ],
      [ 4 ],
      [ 1, 4 ]
    ][transition]
  end

  transitions     = zeros(8)
  simples         = zeros(16)
  last_simples    = [ ]
  point           = next_point()
  num_transitions = 0.0
  ind(r, c) = (c - 1)*4 + r

  while !isnull(point)
    point, trans = next_point(point)

    if isnull(point)
      point = next_point()
    else
      current_simples = trans_to_simples(trans)
      transitions[trans] += 1
      for simple in current_simples
        for last_simple in last_simples
          simples[ind(last_simple, simple)] +=1
        end
      end
      last_simples = current_simples
      num_transitions += 1.0
    end
  end

  (transitions ./ num_transitions, simples ./ num_transitions)
end
All those gathered features can be turned into rows with:
function features_to_row(features :: ImageStats)
  lefts       = [ features.left.min,  features.left.max  ]
  rights      = [ features.right.min, features.right.max ]

  left_ats    = [ features.left.at[i]  for i in 1:features.image_dim ]
  left_diffs  = [ features.left.diff[i]  for i in 1:features.image_dim ]
  right_ats   = [ features.right.at[i] for i in 1:features.image_dim ]
  right_diffs = [ features.right.diff[i]  for i in 1:features.image_dim ]
  frequencies = features.direction_frequencies
  simples     = features.simple_direction_frequencies

  vcat(lefts, left_ats, left_diffs, rights, right_ats, right_diffs, frequencies, simples)
end
Similarly we can construct the column names with:
function features_columns(image :: Array{Float64})
  image_dim   = Base.isqrt(length(image))

  lefts       = [ :left_min,  :left_max  ]
  rights      = [ :right_min, :right_max ]

  left_ats    = [ Symbol("left_at_",  i) for i in 1:image_dim ]
  left_diffs  = [ Symbol("left_diff_",  i) for i in 1:image_dim ]
  right_ats   = [ Symbol("right_at_", i) for i in 1:image_dim ]
  right_diffs = [ Symbol("right_diff_", i) for i in 1:image_dim ]
  frequencies = [ Symbol("direction_freq_", i)   for i in 1:8 ]
  simples     = [ Symbol("simple_trans_", i)   for i in 1:4^2 ]

  vcat(lefts, left_ats, left_diffs, rights, right_ats, right_diffs, frequencies, simples)
end
The data frame constructed with the get_data function can easily be dumped into a CSV file with the writetable function from the DataFrames package.

You'll notice that gathering and extracting features is a lot of work. All of this is needed because in this article we're focusing on the somewhat "classical" way of doing machine learning. You might have heard about algorithms that mimic how the human brain learns. We're not focusing on them here; we will explore them in a future article.

We use the mentioned writetable on data frames computed for both training and test datasets to store two files: processed_train.csv and processed_test.csv.
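
Assuming the Kaggle input files are named train.csv and test.csv (get_data appends the extension itself), that step can be sketched as:
# extract features and dump them for the R part of the pipeline
train = get_data("train")
writetable("processed_train.csv", train)

test = get_data("test", false) # the test set has no label column
writetable("processed_test.csv", test)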

Choosing the model

For the classification task I decided to use the XGBoost library, which is something of a hot new technology in the world of machine learning. It's an improvement over the so-called Random Forest algorithm. The reader can read more about XGBoost on its website: http://xgboost.readthedocs.io/.

Both random forests and XGBoost revolve around an idea called ensemble learning. In this approach we're not building just one learning model - the algorithm actually creates many variations of models and uses them collectively to come up with better results. That's as much of a description as fits here, as this article is already quite lengthy.

Training the model

The training and classification code in R is very simple. We first need to load the libraries that will allow us to load data as well as to build the classification model:
library(xgboost)
library(readr)
Loading the data into data frames is equally straight-forward:
processed_train <- read_csv("processed_train.csv")
processed_test <- read_csv("processed_test.csv")
We then move on to preparing the vector of labels for each row as well as the matrix of features:
labels = processed_train$label
features = processed_train[, 2:141]
features = scale(features)
features = as.matrix(features)

The train-test split

When working with models, one of the ways of evaluating their performance is to split the data into so-called train and test sets. We train the model on one set and then we predict the values from the test set. We then calculate the accuracy of predicted values as the ratio between the number of correct predictions and the number of all observations.
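
Expressed in R, that calculation is a one-liner (a sketch using the predicted and test_labels variables defined later in this section):
# share of correct predictions among all test observations
accuracy <- sum(predicted == test_labels) / length(test_labels)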

Because Kaggle provides the test set without labels, for the sake of evaluating the model's performance without the need to submit the results, we'll split our Kaggle-training set into local train and test ones. We'll use the amazing caret library which provides a wealth of tools for doing machine learning:
library(caret)

index <- createDataPartition(processed_train$label, p = .8, 
                             list = FALSE, 
                             times = 1)

train_labels <- labels[index]
train_features <- features[index,]

test_labels <- labels[-index]
test_features <- features[-index,]
The above code splits the set uniformly based on the labels, so that the train set is approximately 80% of the whole data set.

Using XGBoost as the classification model

We can now make our data digestible by the XGBoost library:
train <- xgb.DMatrix(as.matrix(train_features), label = train_labels)
test  <- xgb.DMatrix(as.matrix(test_features),  label = test_labels)
The next step is to have XGBoost learn from our data. The actual parameters and their explanations are beyond the scope of this overview article, but the reader can look them up on the XGBoost pages:
model <- xgboost(train,
                 max_depth = 16,
                 nrounds = 600,
                 eta = 0.2,
                 objective = "multi:softmax",
                 num_class = 10)
It's critically important to pass the objective as "multi:softmax" and num_class as 10.

Simple performance evaluation with confusion matrix

After waiting a while (a couple of minutes) for the last batch of code to finish computing, we have the classification model ready to be used. Let's use it to predict the labels of our test set:
predicted = predict(model, test)
This returns the vector of predicted values. We'd now like to check how well our model predicts the values. One of the easiest ways is to use the so-called confusion matrix.

As per Wikipedia, a confusion matrix is simply:

(...) also known as an error matrix, is a specific table layout that allows visualization of the performance of an algorithm, typically a supervised learning one (in unsupervised learning it is usually called a matching matrix). Each column of the matrix represents the instances in a predicted class while each row represents the instances in an actual class (or vice versa). The name stems from the fact that it makes it easy to see if the system is confusing two classes (i.e. commonly mislabelling one as another).

The caret library provides a very easy to use function for examining the confusion matrix and statistics derived from it:
confusionMatrix(data=predicted, reference=labels)
The function returns an R list that gets pretty printed to the R console. In our case it looks like the following:
Confusion Matrix and Statistics

          Reference
Prediction   0   1   2   3   4   5   6   7   8   9
         0 819   0   3   3   1   1   2   1  10   5
         1   0 923   0   4   5   1   5   3   4   5
         2   4   2 766  26   2   6   8  12   5   0
         3   2   0  15 799   0  22   2   8   0   8
         4   5   2   1   0 761   1   0  15   4  19
         5   1   3   0  13   2 719   3   0   9   6
         6   5   3   4   1   6   5 790   0  16   2
         7   1   7  12   9   2   3   1 813   4  16
         8   6   2   4   7   8  11   8   5 767  10
         9   5   2   1  13  22   6   1  14  14 746

Overall Statistics
                                         
               Accuracy : 0.9411         
                 95% CI : (0.9358, 0.946)
    No Information Rate : 0.1124         
    P-Value [Acc > NIR] : < 2.2e-16      
                                         
                  Kappa : 0.9345         
 Mcnemar's Test P-Value : NA             

(...)
Each column in the matrix represents actual labels, while rows represent what our algorithm predicted the value to be. There's also the accuracy rate printed for us, which in this case equals 0.9411. This means that our code was able to predict the correct values of handwritten digits for 94.11% of observations.

Submitting the results

We got an accuracy rate of 0.9411 on our local test set, and it turned out to be very close to the one we got against the test set coming from Kaggle. After predicting the competition values and submitting them, the accuracy rate computed by Kaggle was 0.94357. That's quite okay given the fact that we're not using any of the new and fancy techniques here.
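
For completeness, here's a sketch of producing such a submission file in R. The ImageId and Label column names follow the competition's sample submission; processed_test is the processed Kaggle test set loaded earlier, scaled the same way as the training features:
competition_features <- as.matrix(scale(processed_test))
competition_predicted <- predict(model, xgb.DMatrix(competition_features))
submission <- data.frame(ImageId = seq_along(competition_predicted),
                         Label   = competition_predicted)
write_csv(submission, "submission.csv")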

Also, we haven't done any parameter tuning, which could surely improve the overall accuracy. We could also revisit the code from the feature extraction phase. One improvement I can think of would be to first crop and resize - and only then compute the skeleton, which might preserve more information about the shape. We could also use the confusion matrix to find the digit that gets confused most often, and look at the actual images we failed to recognize. That could lead to conclusions about how to improve our feature extraction code. There's always a way to extract more information.

Nowadays, Kagglers from around the world successfully use advanced techniques like convolutional neural networks, getting accuracy scores close to 0.999. Those live in a somewhat different branch of the machine learning world, though. With that type of neural network we don't need to do the feature extraction ourselves - the algorithm includes a step that automatically gathers features, which it later feeds into the network itself. We will take a look at them in a future article.

Series Digital joins End Point!


End Point has the pleasure to announce some very big news!

After an amicable wooing period, End Point has purchased the software consulting company Series Digital, a NYC-based firm that designs and builds custom software solutions. Over the past decade, Series Digital has automated business processes, brought new ideas to market, and built large-scale dynamic infrastructure.

Series Digital launched in 2006 in New York City. From the start, Series Digital managed large database installations for financial services clients such as Goldman Sachs, Merrill Lynch, and Citigroup. They also worked with startups including Drop.io, Byte, Mode Analytics, Domino, and Brewster.

These growth-focused, data-intensive businesses benefited from Series Digital’s expertise in scalable infrastructure, project management, and information security. Today, Series Digital supports clients across many major industry sectors and has focused its development efforts on the Microsoft .NET ecosystem. They have strong design and user experience expertise. Their client list is global.

The Series Digital team began working at End Point on April 3rd, 2017.

The CEO of Series Digital is Jonathan Blessing. He joins End Point’s leadership team as Director of Client Engagements. End Point has had a relationship with Jonathan since 2010, and looks forward with great anticipation to the role he will play expanding End Point’s consulting business.

To help support End Point’s expansion into .NET solutions, End Point has hired Dan Briones, a 25-year veteran of IT infrastructure engineering, to serve as Project and Team Manager for the Series Digital group. Dan started working with End Point at the end of March.

The End Point leadership team is very excited by the addition of Dan, Jonathan, and the rest of the talented Series Digital team: Jon Allen, Ed Huott, Dylan Wooters, Vasile Laur, Liz Flyntz, Andrew Grosser, William Yeack, and Ian Neilsen.

End Point’s reputation has been built upon its excellence in e-commerce, managed infrastructure, and database support. We are excited by the addition of Series Digital, which both deepens those abilities, and allows us to offer new services.

Talk to us to hear about the new ways we can help you!

Amazon AWS upgrades to Postgres with Bucardo


Many of our clients at End Point are using the incredible Amazon Relational Database Service (RDS), which allows for quick setup and use of a database system. While RDS minimizes many database administration tasks, some issues still exist, one of which is upgrading. Getting to a new version of Postgres is simple enough with RDS, but we've had clients use Bucardo to do the upgrade rather than Amazon's built-in upgrade process. Some of you may be exclaiming "A trigger-based replication system just to upgrade?!"; while using it may seem unintuitive, there are some very good reasons to use Bucardo for your RDS upgrade:

Minimize application downtime

Many businesses are very sensitive to any database downtime, and upgrading your database to a new version always incurs that cost. Although RDS uses the ultra-fast pg_upgrade --links method, the whole upgrade process can take quite a while - or at least too long for the business to accept. Bucardo can reduce the application downtime from around seven minutes to ten seconds or less.

Upgrade more than one version at once

As of this writing (June 2017), RDS only allows upgrading one major Postgres version at a time. Since pg_upgrade can easily jump several major versions at once, this limitation will probably be fixed someday. Still, it means even more application downtime - to the tune of seven minutes for each major version. If you are going from 9.3 to 9.6 (via 9.4 and 9.5), that's at least 21 minutes of application downtime, with many unnecessary steps along the way. The total time for Bucardo to jump from 9.3 to 9.6 (or from any major version to any other) is still under ten seconds.

Application testing with live data

The Bucardo upgrade process involves setting up a second RDS instance running the newer version, copying the data from the current RDS server, and then letting Bucardo replicate the changes as they come in. With this system, you can have two "live" databases you can point your applications to. With RDS, you must create a snapshot of your current RDS, upgrade *that*, and then point your application to the new (and frozen-in-time) database. Although this is still useful for testing your application against the newer version of the database, it is not as useful as having an automatically-updated version of the database.

Control and easy rollback

With Bucardo, the initial setup costs, and the overhead of using triggers on your production database, is balanced a bit by ensuring you have complete control over the upgrade process. The migration can happen when you want, at a pace you want, and can even happen in stages as you point some of the applications in your stack to the new version, while keeping some pointed at the old. And rolling back is as simple as pointing apps back at the older version. You could even set up Bucardo as "master-master", such that both new and old versions can write data at the same time (although this step is rarely necessary).

Database bloat removal

Although the pg_upgrade program that Amazon RDS uses for upgrading is extraordinarily fast and efficient, the data files are seldom, if ever, changed at all, and table and index bloat is never removed. On the other hand, an upgrade system using Bucardo creates the tables from scratch on the new database, and thus completely removes all historical bloat. (Indeed, one time a client thought something had gone wrong, as the new version's total database size had shrunk radically - but it was simply removal of all table bloat!).

Statistics remain in place

The pg_upgrade program currently has a glaring flaw: it does not copy the information in the pg_statistic table. This means that although an Amazon RDS upgrade completes in about seven minutes, performance will range anywhere from slightly slow to completely unusable until all those statistics are regenerated on the new version via the ANALYZE command. How long that takes depends on a number of factors, but in general, the larger your database, the longer it will take - a database-wide analyze can take hours on very large databases. As mentioned above, upgrading via Bucardo relies on COPYing the data to a fresh copy of the table. Although the statistics also need to be created when using Bucardo, their time cost does NOT add to the upgrade time, as the ANALYZE can be done at any point beforehand, making the effective cost of generating statistics zero.
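
Generating them ahead of time is as simple as running a database-wide ANALYZE against the new server at any point before the final cutover, for example (using the "rds96" service name defined in the walkthrough below):

$ psql service=rds96 -c "ANALYZE"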

Upgrading RDS the Amazon way

Having said all that, the native upgrade system for RDS is very simple and fast. If the drawbacks above do not apply to you - or can be suffered with minimal business pain - then this way should always be the upgrade approach to use. Here is a quick walk through of how an Amazon RDS upgrade is done.

For this example, we will create a new Amazon RDS instance. The creation is amazingly simple: just log into aws.amazon.com, choose RDS, choose PostgreSQL (always the best choice!), and then fill in a few details, such as preferred version, server size, etc. The "DB Engine Version" was set to "PostgreSQL 9.3.16-R1", the "DB Instance Class" to db.t2.small (1 vCPU, 2 GiB RAM), and "Multi-AZ Deployment" to no. All other choices were left at the default. To finish up this section of the setup, "DB Instance Identifier" was set to gregtest, the "Master Username" to greg, and the "Master Password" to b5fc93f818a3a8065c3b25b5e45fec19.

Clicking on "Next Step" brings up more options, but the only one that needs to change is to specify the "Database Name" as gtest. Finally, the "Launch DB Instance" button. The new database is on the way! Select "View your DB Instance" and then keep reloading until the "Status" changes to Active.

Once the instance is running, you will be shown a connection string that looks like this: gregtest.zqsvirfhzvg.us-east-1.rds.amazonaws.com:5432. That standard port is not a problem, but who wants to ever type that hostname out, or even have to look at it? The pg_service.conf file comes to the rescue with this new entry inside the ~/.pg_service.conf file:

[gtest]
host=gregtest.zqsvirfhzvg.us-east-1.rds.amazonaws.com
port=5432
dbname=gtest
user=greg
password=b5fc93f818a3a8065c3b25b5e45fec19
connect_timeout=10

Now we run a quick test to make sure psql is able to connect, and that the database is an Amazon RDS database:

$ psql service=gtest -Atc "show rds.superuser_variables"
session_replication_role

We want to use the pgbench program to add a little content to the database, just to give the upgrade process something to do. Unfortunately, we cannot simply feed the "service=gtest" line to the pgbench program, but a little environment variable craftiness gets the job done:

$ unset PGSERVICEFILE PGSERVICE PGHOST PGPORT PGUSER PGDATABASE
$ export PGSERVICEFILE=/home/greg/.pg_service.conf PGSERVICE=gtest
$ pgbench -i -s 4
NOTICE:  table "pgbench_history" does not exist, skipping
NOTICE:  table "pgbench_tellers" does not exist, skipping
NOTICE:  table "pgbench_accounts" does not exist, skipping
NOTICE:  table "pgbench_branches" does not exist, skipping
creating tables...
100000 of 400000 tuples (25%) done (elapsed 0.66 s, remaining 0.72 s)
200000 of 400000 tuples (50%) done (elapsed 1.69 s, remaining 0.78 s)
300000 of 400000 tuples (75%) done (elapsed 4.83 s, remaining 0.68 s)
400000 of 400000 tuples (100%) done (elapsed 7.84 s, remaining 0.00 s)
vacuum...
set primary keys...
done.

At 68MB in size, this is still not a big database - so let's create a large table, then create a bunch of databases, to make pg_upgrade work a little harder:

## Make the whole database 1707 MB:
$ psql service=gtest -c "CREATE TABLE extra AS SELECT * FROM pgbench_accounts"SELECT 400000$ for i in {1..5}; do psql service=gtest -qc "INSERT INTO extra SELECT * FROM extra"; done

## Make the whole cluster about 17 GB:
$ for i in {1..9}; do psql service=gtest -qc "CREATE DATABASE gtest$i TEMPLATE gtest" ; done
$ psql service=gtest -c "SELECT pg_size_pretty(sum(pg_database_size(oid))) FROM pg_database WHERE datname ~ 'gtest'"17 GB

To start the upgrade, we log into the AWS console, and choose "Instance Actions", then "Modify". Our only choices for instances are "9.4.9" and "9.4.11", plus some older revisions in the 9.3 branch. Why anything other than the latest revision in the next major branch (i.e. 9.4.11) is shown, I have no idea! Choose 9.4.11, scroll down to the bottom, choose "Apply Immediately", then "Continue", then "Modify DB Instance". The upgrade has begun!

How long will it take? All one can do is keep refreshing to see when your new database is ready. As mentioned above, 7 minutes and 30 seconds is the total time. The logs show how things break down:

11:52:43 DB instance shutdown
11:55:06 Backing up DB instance
11:56:12 DB instance shutdown
11:58:42 The parameter max_wal_senders was set to a value incompatible with replication. It has been adjusted from 5 to 10.
11:59:56 DB instance restarted
12:00:18 Updated to use DBParameterGroup default.postgres9.4

How much of that time is spent on upgrading though? Surprisingly little. We can do a quick local test to see how long the same database takes to upgrade from 9.3 to 9.4 using pg_upgrade --links: 20 seconds! Ideally Amazon will improve upon the total downtime at some point.

Upgrading RDS with Bucardo

As an asynchronous, trigger-based replication system, Bucardo is perfect for situations like this where you need to temporarily sync up two concurrent versions of Postgres. The basic process is to create a new Amazon RDS instance of your new Postgres version (e.g. 9.6), install the Bucardo program on a cheap EC2 box, and then have Bucardo replicate from the old Postgres version (e.g. 9.3) to the new one. Once both instances are in sync, just point your application to the new version and shut the old one down. One way to perform the upgrade is detailed below.

Some of the steps are simplified, but the overall process is intact. First, find a temporary box for Bucardo to run on. It doesn't have to be powerful, or have much disk space, but as network connectivity is important, using an EC2 box is recommended. Install Postgres (9.6 or better, because of pg_dump) and Bucardo (latest or HEAD recommended), then put your old and new RDS databases into your pg_service.conf file as "rds93" and "rds96" to keep things simple.
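
Those two service entries might look like this (hostnames and credentials below are placeholders):

[rds93]
host=oldcluster.zqsvirfhzvg.us-east-1.rds.amazonaws.com
port=5432
dbname=mydb
user=greg
password=secret
connect_timeout=10

[rds96]
host=newcluster.zqsvirfhzvg.us-east-1.rds.amazonaws.com
port=5432
dbname=mydb
user=greg
password=secret
connect_timeout=10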

The next step is to make a copy of the database on the new Postgres 9.6 RDS database. We want the bare minimum schema here: no data, no triggers, no indexes, etc. Luckily, this is simple using pg_dump:

$ pg_dump service=rds93 --section=pre-data | psql -q service=rds96

From this point forward, no DDL should be run on the old server. We take a snapshot of the post-data items right away and save it to a file for later:

$ pg_dump service=rds93 --section=post-data -f rds.postdata.pg

Time to get Bucardo ready. Recall that Bucardo can only replicate tables that have a primary key or unique index. Tables without one can simply be copied over at the final point of migration, as long as they are small enough.

$ bucardo install
$ bucardo add db A dbservice=rds93
$ bucardo add db B dbservice=rds96
## Create a sync and name it 'migrate_rds':
$ bucardo add sync migrate_rds tables=all dbs=A,B

That's it! The current database will now have triggers that are recording any changes made, so we may safely do a bulk copy to the new database. This step might take a very long time, but that's not a problem.

$ pg_dump service=rds93 --section=data | psql -q service=rds96

Before we create the indexes on the new server, we start the Bucardo sync to copy over any rows that were changed while the pg_dump was going on. After that, the indexes, primary keys, and other items can be created:

$ bucardo start
$ tail -f log.bucardo
## Wait until the sync finishes once:
$ bucardo stop
$ psql service=rds96 -q -f rds.postdata.pg 

For the final migration, we simply stop anything from writing to the 9.3 database, have Bucardo perform a final sync of any changed rows, and then point your application to the 9.6 database. The whole process can happen very quickly: well under a minute for most cases.

Upgrading major Postgres versions is never a trivial task, but both Bucardo and pg_upgrade allow it to be orders of magnitude faster and easier than the old method of using the pg_dump utility. Upgrading your Amazon AWS Postgres instance is fast and easy using the AWS pg_upgrade method, but it has limitations, so having Bucardo help out can be a very useful option.

Successful First GEOINT Symposium for End Point Liquid Galaxy

This past week, End Point attended GEOINT Symposium to showcase Liquid Galaxy as an immersive panoramic GIS solution to GEOINT attendees and exhibitors alike.

At the show, we showcased Cesium integrating with ArcGIS and WMS, Google Earth, Street View, Sketchfab, Unity, and panoramic video. Using our Content Management System, we created content around these various features so visitors to our booth could take in the full spectrum of capabilities that the Liquid Galaxy provides.

Additionally, we were able to take data feeds from multiple other booths and display their content during the show! Our work showed everyone at the conference that the Liquid Galaxy is a data-agnostic immersive platform that can handle any sort of data stream and present it in a brilliant display. This can be used to show your large complex data sets in briefing rooms, conference rooms, or command centers.

Given the incredible draw of the Liquid Galaxy, the GEOINT team took special interest in our system and formally interviewed Ben Goldstein in front of the system to learn more! You can view the video of the interview here:



We look forward to developing the relationships we created at GEOINT, and hope to participate further in this great community moving forward. If you would like to learn more please visit our website or email ask@endpoint.com.







Liquid Galaxy at The Ocean Conference


End Point had the privilege of participating in The Ocean Conference at the United Nations, hosted by the International Union for Conservation of Nature (IUCN), these past two weeks. The health of the oceans is critical, and The Ocean Conference, the first United Nations conference on this issue, presents a unique and invaluable opportunity for the world to reverse the precipitous decline of the health of the oceans and seas with concrete solutions.

A Liquid Galaxy was set up in a prominent area on the main floor of the United Nations. End Point created custom content for the Ocean Conference, using the Liquid Galaxy’s Content Management System. Visiting diplomats and government officials were able to experience this content - Liquid Galaxy’s interactive panoramic setup allows visitors to feel immersed in the different locations, with video and information spanning their periphery.

Liquid Galaxy content created for The Ocean Conference included:
  • A study of the Catlin Seaview Survey and how the world's coral reefs are changing
  • 360° panoramic underwater videos
  • All Mission Blue Ocean Hope Spots
  • A guided tour of the Monaco Explorations 3 Year Expedition
  • National Marine Sanctuaries around the United States

We were grateful to be able to create content for such a good cause, and hope to be able to do more good work for the IUCN and the UN! If you’d like to learn more, please visit our website or email ask@endpoint.com.

Zero Pricing in Interchange using CommonAdjust


Product pricing can be quite complex. A typical Interchange catalog will have at least one table in the ProductFiles directive (often products plus either options or variants) and those tables will often have one or more pricing fields (usually price and sales_price). But usually a single, static price isn't sufficient for more complex needs, such as accessory adjustments, quantity pricing, product grouping--not to mention promotions, sales, or other conditional features that may change a product's price for a given situation, dependent on the user's account or session.

Typically, to handle this variety of pricing possibilities, a catalog developer will implement a CommonAdjust algorithm. CommonAdjust can accommodate all the above pricing adjustments and more, and is a powerful tool (yet one that can become quite arcane at deeper levels of complexity). CommonAdjust is enabled by setting the PriceField directive to a non-existent field value in the tables specified in ProductFiles.

To give an adequate introduction and treatise on CommonAdjust would require at a minimum its own post, and likely a series. There are many elements that make up a CommonAdjust string, and subtle operator nuances that instruct it to operate in differing patterns. It is even possible for elements themselves to return new CommonAdjust atoms (a feature we will be leveraging in this discussion). So I will assume for this writing that the reader is generally familiar with CommonAdjust, and we will implement a very simple example to demonstrate henceforth.

To start, let's create a CommonAdjust string that simply replaces the typical PriceField setting, and we'll allow it to accommodate a static sales price:

ProductFiles products
PriceField 0
CommonAdjust :sale_price ;:price

The above, in words, indicates that our products live in the products table, and we want CommonAdjust to handle our pricing by setting PriceField to a non-existent field (0 is a safe bet not to be a valid field in the products table). Our CommonAdjust string is comprised of two atoms, both of which are settors of type database lookup. In the products table, we have 2 fields: sale_price and price. If sale_price is "set" (meaning a non-zero numeric value or another CommonAdjust atom) it will be used as it comes first in the list. The semicolon indicates to Interchange "if a previous atom set a price by this point, we're done with this iteration" and, thus, the price field will be ignored. Otherwise, the next atom is checked (the price field), and as long as the price field is set, it will be used instead.

A few comments here:
  • The bare colon indicates that the field is not restricted to a particular table. Typically, to specify the field, you would have a value like "products:price" or "variants:price". But part of the power of ProductFiles holding products in different tables is you can pick up a sku from any of them. And at that point, you don't know whether you're looking at a sku from products, variants, or as many additional tables as you'd like to grab products from. But if all of them have a price and sales_price field, then you can pick up the pricing from any of them by leaving the table off. You can think of ":price" as "*:price" where asterisk is "table this sku came from".
  • The only indicator that CommonAdjust recognizes as a terminal value is a non-zero numeric value. The proposed price is coerced to numeric, added on to the accumulated price effects of the rest of the CommonAdjust string (if applicable), and the final value is tested for truth. If it is false (empty, undef, or 0) then the process repeats.
  • What happens if none of the atoms produce a non-zero numeric value? If Interchange reaches the end of the original CommonAdjust string without hitting a set atom, it will relent and return a zero cost.

At this point, we finally introduce our situation, one that is not at all uncommon: what if I want a zero price? Let's say I have a promotion for buy one product, get this other product for free. Typically, a developer would expect to be able to override prices from the database by leveraging the "mv_price" parameter in the cart. So, let's adjust our CommonAdjust to accommodate that:

CommonAdjust $ ;:sale_price ;:price

The $ settor in the first atom means "look in the line-item hash for the mv_price parameter and use that, if it's set". But as we've discussed above, we "set" an atom by making it a non-zero numeric value or another CommonAdjust atom. So if we set mv_price to 0, we've gained nothing. CommonAdjust will move on to the next atom (sale_price database settor) and pick up that product's pricing from the database. And even if we set that product's sale_price and price fields to 0, it means everyone purchasing that item would get it for free (not just our promotion that allows the item to be free with the specific purchase of another item).

In the specific case of using the $ settor in CommonAdjust, we can set mv_price to the keyword "free", and that will allow us to price the item for 0. But this restricts us to only be able to use $ and mv_price to have a free item. What if the price comes from a complex calculation, out of a usertag settor? Or out of a calc block settor? The special "free" keyword doesn't work there.
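
For illustration, the "free" approach looks like this (assuming, as in the example below, that the free item is the 2nd line item in the cart):

[calcn]
    $Items->[1]{mv_price} = 'free';
    return;
[/calcn]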

Fortunately, there is a rarely used CommonAdjust settor that will allow for a 0 price item in a general solution. As I mentioned above, CommonAdjust calculations can themselves return other CommonAdjust atoms, which will then be operated on in a subsequent iteration. This frees us from just the special handling that works on $ and mv_price as such an atom can be returned from any of the CommonAdjust atoms and work.

The settor of interest is >>, and according to what documentation there is on it, it was never even intended to be used as a pricing settor! Rather, it was to be a way of redirecting to additional modes for shipping or tax calculations, which can also leverage CommonAdjust for their particular purposes. However, the key to its usefulness here is thus: it does not perform any test on the value tied to it. It is set, untested, into the final result of this call to the chain_cost() routine and returned. And with no test, the fact that it's Perly false as numeric 0 is irrelevant.

So building on our current CommonAdjust, let's leverage >> to allow our companion product to have a zero cost (assuming it is the 2nd line item in the cart):

[calcn]
    $Items->[1]{mv_price} = '>>0';
    return;
[/calcn]

Now what happens is, $ in the first atom picks up the value out of mv_price and, because it's a CommonAdjust atom, is processed in a second iteration. But this CommonAdjust atom is very simple: take the value tied to >> and return it, untested.

Perhaps our pricing is more complex than we can (or would like to) support with using $. So we want to write a usertag, where we have the full power of global Perl at our disposal, but we still have circumstances where that usertag may need to return zero-cost items. Using the built-in "free" solution, we're stuck, short of setting mv_price in the item hash within the usertag, which we may not want to do for a variety of reasons. But using >>, we have no such restriction. So let's change CommonAdjust:

CommonAdjust $ ;[my-special-pricing] ;:sale_price ;:price

Now instead of setting mv_price in the item, let's construct [my-special-pricing] to do some heavy lifting:

UserTag my-special-pricing Routine <<EOR
sub {
    # A bunch of conditional, complicated code, but then ...
    elsif (buy_one_get_one_test($item)) {
        # This is where we know this normally priced item is supposed to be
        # free because of our promotion. Excellent!

        return '>>0';
    }
    # remaining code we don't care about for this discussion
}
EOR

Now we haven't slapped a zero cost onto the line item in a sticky fashion, like we do by setting mv_price. So presumably, above, if the user gets sneaky and removes the "buy one" sku identified by our promotion, our equally clever buy_one_get_one_test() sniffs it out, and the 0 price is no longer in effect.

For more information on CommonAdjust, see the Custom Pricing section of the 'price' glossary entry. And for more examples of leveraging CommonAdjust for quantity and attribute pricing adjustments, see the Examples section of the CommonAdjust documentation entry.

Liquid Galaxy at 2017 BOMA International Annual Conference & Expo



We just returned from Nashville after bringing Liquid Galaxy to the 2017 BOMA International Annual Conference & Expo. Commercial real estate has always been a seamless fit for Liquid Galaxy due to the system's ability to showcase real estate and property data in an interactive and panoramic setting. Within real estate, Liquid Galaxy was first used by CBRE and has since been adopted by Hilton, JLL, and Prologis to name a few.

In preparation for BOMA (Building Owners and Managers Association), we prepared sample commercial real estate content on our content management system to be displayed on the Liquid Galaxy. This included the creation of content about Hudson Yards, the new development being built on Manhattan's West Side.

The content that was created demonstrates how brokers and directors at commercial real estate companies can tell an immersive, panoramic, and interactive story to their potential clients and investors. Future developments can be shown in developing areas, and with the content management system you can create stories that include videos, images, and 3D models of these future developments. The following video demonstrates this well:



We were able to effectively showcase our ability to incorporate 3D models and mapping layers at BOMA through the use of Google Earth, Cesium, ArcGIS, Unity, and Sketchfab. We were also able to pull data and develop content for neighboring booths and visitors, demonstrating what an easy and data-agnostic platform Liquid Galaxy can be.

We’re very excited about the increasing traction we have in the real estate industry, and hope our involvement with BOMA will take that to the next level. If you’d like to learn more about Liquid Galaxy, please visit our website or email ask@endpoint.com.

Postgres migrating SQL_ASCII to UTF-8 with fix_latin


Upgrading Postgres is not quite as painful as it used to be, thanks primarily to the pg_upgrade program, but there are times when it simply cannot be used. We recently had an existing End Point client come to us requesting help upgrading from their current Postgres database (version 9.2) to the latest version (9.6 - but soon to be 10). They also wanted to finally move away from their SQL_ASCII encoding to UTF-8. As this meant that pg_upgrade could not be used, we also took the opportunity to enable checksums as well (this change cannot be done via pg_upgrade). Finally, they were moving their database server to new hardware. There were many lessons learned and bumps along the way for this migration, but for this post I'd like to focus on one of the most vexing problems, the database encoding.

When a Postgres database is created, it is set to a specific encoding. The most common one (and the default) is "UTF8". This covers 99% of users' needs. The second most common one is the poorly-named "SQL_ASCII" encoding, which should be named "DANGER_DO_NOT_USE_THIS_ENCODING", because it causes nothing but trouble. The SQL_ASCII encoding basically means no encoding at all, and simply stores any bytes you throw at it. This usually means the database ends up containing a whole mess of different encodings, creating a "byte soup" that will be difficult to sanitize by moving to a real encoding (i.e. UTF-8).
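
You can check which encoding each of your databases uses with a quick catalog query:

$ psql -c "SELECT datname, pg_encoding_to_char(encoding) FROM pg_database"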

Many tools exist which convert text from one encoding to another. One of the most popular ones on Unix boxes is "iconv". Although this program works great if your source text is using one encoding, it fails when it encounters byte soup.

For this migration, we first did a pg_dump from the old database to a newly created UTF-8 test database, just to see which tables had encoding problems. Quite a few did - but not all of them! - so we wrote a script to import tables in parallel, with some filtering for the problem ones. As mentioned above, iconv was not particularly helpful: looking at the tables closely showed evidence of many different encodings in each one: Windows-1252, ISO-8859-1, Japanese, Greek, and many others. There were even large bits that were plainly binary data (e.g. images) that simply got shoved into a text field somehow. This is the big problem with SQL_ASCII: it accepts *everything*, and does no validation whatsoever. The iconv program simply could not handle these tables, even when adding the //IGNORE option.

To better explain the problem and the solution, let's create a small text file with a jumble of encodings. Discussions of how UTF-8 represents characters, and its interactions with Unicode, are avoided here, as Unicode is a dense, complex subject, and this article is dry enough already. :)

First, we want to add some items using the encodings 'Windows-1252' and 'Latin-1'. These encoding systems were attempts to extend the basic ASCII character set to include more characters. As these encodings pre-date the invention of UTF-8, they do it in a very inelegant (and incompatible) way. Use of the "echo" command is a great way to add arbitrary bytes to a file as it allows direct hex input:

$ echo -e "[Windows-1252]   Euro: \x80   Double dagger: \x87"> sample.data
$ echo -e "[Latin-1]   Yen: \xa5   Half: \xbd">> sample.data 
$ echo -e "[Japanese]   Ship: \xe8\x88\xb9">> sample.data
$ echo -e "[Invalid UTF-8]  Blob: \xf4\xa5\xa3\xa5">> sample.data 

This file looks ugly. Notice all the "wrong" characters when we simply view the file directly:

$ cat sample.data
[Windows-1252]   Euro: �   Double dagger: �
[Latin-1]   Yen: �   Half: �
[Japanese]   Ship: 船 
[Invalid UTF-8]  Blob: ����

Running iconv is of little help:

## With no source encoding given, it errors on the Euro:
$ iconv -t utf8 sample.data >/dev/null
iconv: illegal input sequence at position 23

## We can tell it to ignore those errors, but it still barfs on the blob:
$ iconv -t utf8//ignore sample.data >/dev/null
iconv: illegal input sequence at position 123

## Telling it the source is Windows-1252 fixes some things, but still sinks the Ship:
$ iconv -f windows-1252 -t utf8//ignore sample.data
[Windows-1252]   Euro: €   Double dagger: ‡
[Latin-1]   Yen: ¥   Half: ½
[Japanese]   Ship: 船
[Invalid UTF-8]  Blob: ô¥£¥

After testing a few other tools, we discovered the nifty Encoding::FixLatin, a Perl module which provides a command-line program called "fix_latin". Rather than being authoritative like iconv, it tries its best to fix things up with educated guesses. Its documentation gives a good summary:

  The script acts as a filter, taking source data which may contain a mix of
  ASCII, UTF8, ISO8859-1 and CP1252 characters, and producing output will be
  all ASCII/UTF8.

  Multi-byte UTF8 characters will be passed through unchanged (although
  over-long UTF8 byte sequences will be converted to the shortest normal
  form). Single byte characters will be converted as follows:

    0x00 - 0x7F   ASCII - passed through unchanged 
    0x80 - 0x9F   Converted to UTF8 using CP1252 mappings
    0xA0 - 0xFF   Converted to UTF8 using Latin-1 mappings

While this works great for fixing the Windows-1252 and Latin-1 problems (and thus accounted for at least 95% of our tables' bad encodings), it still allows "invalid" UTF-8 to pass on through. Which means that Postgres will still refuse to accept it. Let's check our test file:

$ fix_latin sample.data
[Windows-1252]   Euro: €   Double dagger: ‡
[Latin-1]   Yen: ¥   Half: ½
[Japanese]   Ship: 船
[Invalid UTF-8]  Blob: ����

## Postgres will refuse to import that last part:
$ echo "SELECT E'""$(fix_latin sample.data)""';" | psql
ERROR:  invalid byte sequence for encoding "UTF8": 0xf4 0xa5 0xa3 0xa5

## Even adding iconv is of no help:
$ echo "SELECT E'""$(fix_latin sample.data | iconv -t utf-8)""';" | psql
ERROR:  invalid byte sequence for encoding "UTF8": 0xf4 0xa5 0xa3 0xa5

The UTF-8 specification is rather dense and puts many requirements on encoders and decoders. How well programs implement these requirements (and optional bits) varies, of course. But at the end of the day, we needed that data to go into a UTF-8 encoded Postgres database without complaint. When in doubt, go to the source! The relevant file in the Postgres source code responsible for rejecting bad UTF-8 (as in the examples above) is src/backend/utils/mb/wchar.c. Analyzing that file shows a small but elegant piece of code whose job is to ensure only "legal" UTF-8 is accepted:

bool
pg_utf8_islegal(const unsigned char *source, int length)
{
  unsigned char a;

  switch (length)
  {
    default:
      /* reject lengths 5 and 6 for now */
      return false;
    case 4:
      a = source[3];
      if (a < 0x80 || a > 0xBF)
        return false;
      /* FALL THRU */
    case 3: 
      a = source[2];
      if (a < 0x80 || a > 0xBF)
        return false;
      /* FALL THRU */
    case 2:
      a = source[1];
      switch (*source)
      {
        case 0xE0:
          if (a < 0xA0 || a > 0xBF)
            return false;
          break;
        case 0xED:
          if (a < 0x80 || a > 0x9F)
            return false;
          break;
        case 0xF0:
          if (a < 0x90 || a > 0xBF)
            return false;
          break;
        case 0xF4:
          if (a < 0x80 || a > 0x8F)
            return false;
          break;
        default:
          if (a < 0x80 || a > 0xBF)
            return false;
          break;
      }
      /* FALL THRU */
    case 1:
      a = *source; 
      if (a >= 0x80 && a < 0xC2)
        return false;
      if (a > 0xF4)
        return false;
      break;
  }
  return true;
}

Now that we know the UTF-8 rules for Postgres, how do we ensure our data follows it? While we could have made another standalone filter to run after fix_latin, that would increase the migration time. So I made a quick patch to the fix_latin program itself, rewriting that C logic in Perl. A new option "--strict-utf8" was added. Its job is to simply enforce the rules found in the Postgres source code. If a character is invalid, it is replaced with a question mark (there are other choices for a replacement character, but we decided simple question marks were quick and easy - and the surrounding data was unlikely to be read or even used anyway).

Voila! All of the data was now going into Postgres without a problem. Observe:

$ echo "SELECT E'""$(fix_latin  sample.data)""';" | psql
ERROR:  invalid byte sequence for encoding "UTF8": 0xf4 0xa5 0xa3 0xa5$ echo "SELECT E'""$(fix_latin --strict-utf8 sample.data)""';" | psql
                   ?column?                   
----------------------------------------------
  [Windows-1252]   Euro: €   Double dagger: ‡+
 [Latin-1]   Yen: ¥   Half: ½                +
 [Japanese]   Ship: 船                       +
 [Invalid UTF-8]  Blob: ???? 
(1 row)

What are the lessons here? First and foremost, never use SQL_ASCII. It's outdated, dangerous, and will cause much pain down the road. Second, there are an amazing number of client encodings in use, especially for old data, but the world has pretty much standardized on UTF-8 these days, so even if you are stuck with SQL_ASCII, the amount of Windows-1252 and other monstrosities will be small. Third, don't be afraid to go to the source. If Postgres is rejecting your data, it's probably for a very good reason, so find out exactly why. There were other challenges to overcome in this migration, but the encoding was certainly one of the most interesting ones. Everyone, the client and us, is very happy to finally have everything using UTF-8!


co:collective Doable Innovation Software


co:collective is a growth accelerator that works with leadership teams to conceive and execute innovation in the customer experience using a proprietary methodology called StoryDoing©.

Doable, one of co:collective’s recent innovations, is a cloud-based platform designed to empower employees to meaningfully contribute and collaborate on ideas that move their business forward. The tool allows companies to solicit ideas from employees at all levels of an organization, filter down those ideas, make decisions as a team, and then implement a project - all the while collaborating in a fun and easy-to-use application. Over 200 companies across multiple sectors use Doable to create new products, build new features, and solve problems to keep their business growing.

co:collective engaged End Point’s front-end developer, Kamil Ciemniewski, to work with their in-house development team led by Tommy Dunn. Kamil joined the Doable effort to refactor the Doable application, which was moving from a pure Ruby on Rails application to an Angular frontend with a Ruby on Rails backend. Kamil has been working with Doable since March to complete the application rewrite, and the project has gone immensely well. Kamil brings extensive frontend and backend knowledge to co:collective, and together they’ve been able to refactor the site to be more efficient and powerful than before.


JBoss 4/5/6 to Wildfly migration tips


Introduction

Recently, we took over a big Java project that ran on the old JBoss 4 stack. Knowing how dangerous outdated software can be for a business, we and our client agreed that the most important task was to upgrade the server stack to the latest WildFly version.

It’s definitely not an easy job, but it’s an investment worth making so you can sleep well and not worry about software problems.

This time it was even more work because the application was complicated and undocumented; that’s why I wanted to share some tips and problem resolutions for issues I encountered.

Server configuration

You can set up the server using multiple configuration files in the standalone/configuration directory.

I recommend using the standalone-full.xml file for most setups; it contains the default full stack, as opposed to standalone.xml.

You can also set up application-specific configuration using various configuration XML files (https://docs.jboss.org/author/display/WFLY10/Deployment+Descriptors+used+In+WildFly). Remember to keep the application-specific configuration in the classpath.

Quartz as a message queue

The Quartz library was used as a native message queue in previous JBoss versions. If you find yourself struggling to use its resource adapter with WildFly, just skip it. It’s definitely too much work, even if it’s possible.

In the latest WildFly version (10.1 as of today) the default message queue library is ActiveMQ. It has almost the same API as the old Quartz, so it’s easy to use.

Quartz as a job scheduler

We had multiple cron-like jobs to migrate as well. All the jobs used Quartz to schedule runs.

The best solution here is to update Quartz to the latest version (yes!) and use a new API (http://www.quartz-scheduler.org/documentation/quartz-2.2.x/tutorials/tutorial-lesson-06.html) to create CronTriggers for the jobs.

// Static imports from the Quartz 2.x builder API:
import static org.quartz.TriggerBuilder.newTrigger;
import static org.quartz.CronScheduleBuilder.cronSchedule;

trigger = newTrigger()
    .withIdentity("trigger3", "group1")
    .withSchedule(cronSchedule("0 42 10 * * ?"))  // fire every day at 10:42
    .forJob(myJobKey)
    .build();

You can use the same cron syntax (e.g. 0 42 10 * * ?) as in the 12-year-old Quartz version. Yes!

JAR dependencies

In WildFly you can set up an internal module for each JAR dependency. It can be pretty time-consuming to create declarations for more than 100 libraries (exactly 104 in our case). We decided to use Maven to handle our application’s dependencies and skip declaring modules in WildFly. Why? In our opinion it’s better to encapsulate everything in an EAR file and keep the WildFly configuration minimal, as we won’t use this server for any other application in the future.

Just keep your dependencies in the classpath and you will be fine.

JBoss CLI

I really prefer the bin/jboss-cli.sh interface to the web interface to handle deployments. It’s a powerful tool and much faster to work with than clicking through the UI.
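
For example, a typical deployment session looks something like this (the EAR path here is a made-up example; the management port shown is the default):

$ bin/jboss-cli.sh --connect
[standalone@localhost:9990 /] deploy /opt/builds/myapp.ear
[standalone@localhost:9990 /] undeploy myapp.ear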

JNDI path

If you can’t access your JNDI definition, try using the global namespace. Up to Java EE 6, developers defined their own JNDI names. These names had a global scope. This doesn’t work anymore. To access a previously globally scoped name, use this pattern: java:global/OLD_JNDI_NAME.

The java:global namespace was introduced in Java EE 6.

Reverse proxy

To configure a WildFly application with a reverse proxy you need to, of course, set up a virtual host with a reverse proxy declaration.

In addition, you must add an attribute to the server’s http-listener in the standalone-full.xml file. The attribute is proxy-address-forwarding and must be set to true.

Here is an example declaration (a sketch assuming the default Undertow server and listener names from standalone-full.xml; the subsystem version may differ in your install):
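
<subsystem xmlns="urn:jboss:domain:undertow:3.1">
    <server name="default-server">
        <http-listener name="default" socket-binding="http" proxy-address-forwarding="true"/>
        <!-- ... the rest of the server and host configuration ... -->
    </server>
</subsystem>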

Summary

If you’re considering an upgrade to WildFly, I can recommend it: it’s much faster than JBoss 4/5/6, scalable, and fully prepared for modern applications.

How to check process duration in Linux with the "ps" command


In certain cases we might want to know a process' elapsed running time. It turns out the "ps" command can easily assist us with that. According to the "ps" manual, etime reports the elapsed time in [[DD-]hh:]mm:ss format, while etimes reports it in seconds.

From "ps" manpage:

etime       ELAPSED   elapsed time since the process was started, in the form [[DD-]hh:]mm:ss.
  etimes      ELAPSED   elapsed time since the process was started, in seconds.

To use that, we could run (for [[DD-]hh:]mm:ss format):

ps -p "pid" -o etime
or in seconds:
ps -p "pid" -o etimes

In this case the "pid" should be replaced with your intended process ID.

The following helps report the output more nicely. We can combine -o etime or -o etimes with another argument, "command", to show the executed command along with its absolute path:

ps -p "28590" -o etime,command
ELAPSED COMMAND
21:45 /usr/bin/perl ./fastcgi-wrapper.pl 7999

We can also get the start date of the process' execution:

najmi@ubuntu-ampang:~$ ps -p 21745 -o etime,command,start
    ELAPSED COMMAND                      STARTED
 1-19:47:45 /usr/lib/firefox/firefox      Aug 02

What if we do not want to look up the PID manually, and instead (since we are sure of the name) just use the name of the running application? We can simply use pgrep or pidof:

najmi@ubuntu-ampang:~$ ps -p $(pgrep firefox) -o pid,cmd,start,etime
  PID CMD                          STARTED     ELAPSED
21745 /usr/lib/firefox/firefox      Aug 02  2-04:29:36

najmi@ubuntu-ampang:~$ ps -p $(pidof firefox) -o pid,cmd,start,etime
  PID CMD                          STARTED     ELAPSED
21745 /usr/lib/firefox/firefox      Aug 02  2-04:29:42

What if the command spawned many processes? Take the Chrome browser as an example:
najmi@ubuntu-ampang:~$ ps -p $(pidof chrome) -o pid,comm,cmd,start,etime
error: process ID list syntax error

Usage:
 ps [options]

 Try 'ps --help <simple|list|output|threads|misc|all>'
  or 'ps --help <s|l|o|t|m|a>'
 for additional help text.

For more details see ps(1).

The best way I've found (so far) is to create a loop. It seems pidof is more accurate at matching the exact application name (string) that we feed into it.

With pgrep:
najmi@ubuntu-ampang:~$ for i in `pgrep chrome`;do ps -p $i -o pid,comm,cmd,start,etime|tail -n +2;done
 2255 chrome          /opt/google/chrome/chrome - 08:05:43    02:55:17
 4990 chrome          /opt/google/chrome/chrome - 10:39:16       21:44
 5567 chrome          /opt/google/chrome/chrome - 10:53:13       07:47
 9448 chrome          /opt/google/chrome/chrome -   Jul 31  3-12:25:08
10033 chrome          /opt/google/chrome/chrome     Jul 27  8-10:43:42
10044 chrome          /opt/google/chrome/chrome -   Jul 27  8-10:43:42
10050 chrome          /opt/google/chrome/chrome -   Jul 27  8-10:43:42
10187 chrome          /opt/google/chrome/chrome -   Jul 27  8-10:43:39
10234 chrome          /opt/google/chrome/chrome -   Jul 27  8-10:43:37
10236 chrome          /opt/google/chrome/chrome -   Jul 27  8-10:43:37
19440 chrome          /opt/google/chrome/chrome - 22:30:34    12:30:26
20229 chrome          /opt/google/chrome/chrome -   Aug 03  1-09:51:25
20514 chrome          /opt/google/chrome/chrome - 22:52:25    12:08:35
20547 chrome          /opt/google/chrome/chrome - 22:52:36    12:08:24
21009 chrome          /opt/google/chrome/chrome -   Aug 03  1-09:27:11
22458 chrome          /opt/google/chrome/chrome -   Jul 27  8-03:44:07
22474 chrome-gnome-sh /usr/bin/python3 /usr/bin/c   Jul 27  8-03:44:07
23681 chrome          /opt/google/chrome/chrome -   Aug 03  1-03:33:45
23691 chrome          /opt/google/chrome/chrome -   Aug 03  1-03:33:45
23870 chrome          /opt/google/chrome/chrome - 00:15:15    10:45:45
24544 chrome          /opt/google/chrome/chrome - 00:40:17    10:20:43
25116 chrome          /opt/google/chrome/chrome - 00:51:31    10:09:29
25466 chrome          /opt/google/chrome/chrome - 00:59:55    10:01:05
29060 chrome          /opt/google/chrome/chrome - 02:15:42    08:45:18
With pidof:
najmi@ubuntu-ampang:~$ for i in `pidof chrome`;do ps -p $i -o pid,comm,cmd,start,etime|tail -n +2;done
29060 chrome          /opt/google/chrome/chrome - 02:15:42    08:47:40
25466 chrome          /opt/google/chrome/chrome - 00:59:55    10:03:27
25116 chrome          /opt/google/chrome/chrome - 00:51:31    10:11:51
24544 chrome          /opt/google/chrome/chrome - 00:40:17    10:23:05
23870 chrome          /opt/google/chrome/chrome - 00:15:15    10:48:07
23691 chrome          /opt/google/chrome/chrome -   Aug 03  1-03:36:07
23681 chrome          /opt/google/chrome/chrome -   Aug 03  1-03:36:07
22458 chrome          /opt/google/chrome/chrome -   Jul 27  8-03:46:29
21009 chrome          /opt/google/chrome/chrome -   Aug 03  1-09:29:33
20547 chrome          /opt/google/chrome/chrome - 22:52:36    12:10:46
20514 chrome          /opt/google/chrome/chrome - 22:52:25    12:10:57
20229 chrome          /opt/google/chrome/chrome -   Aug 03  1-09:53:47
19440 chrome          /opt/google/chrome/chrome - 22:30:34    12:32:48
10236 chrome          /opt/google/chrome/chrome -   Jul 27  8-10:45:59
10234 chrome          /opt/google/chrome/chrome -   Jul 27  8-10:45:59
10187 chrome          /opt/google/chrome/chrome -   Jul 27  8-10:46:01
10050 chrome          /opt/google/chrome/chrome -   Jul 27  8-10:46:04
10044 chrome          /opt/google/chrome/chrome -   Jul 27  8-10:46:04
10033 chrome          /opt/google/chrome/chrome     Jul 27  8-10:46:04
 9448 chrome          /opt/google/chrome/chrome -   Jul 31  3-12:27:30
 5567 chrome          /opt/google/chrome/chrome - 10:53:13       10:09
 4990 chrome          /opt/google/chrome/chrome - 10:39:16       24:06
 2255 chrome          /opt/google/chrome/chrome - 08:05:43    02:57:39

There is another tool, called stat, which records the timestamps of a file, but that serves a slightly different purpose. Stay tuned for the next blog post!

How to split Git repositories into two


Ever wondered how to split your Git repo into two repos?

First you need to decide which files and directories you want to move to separate repos. In this example we're moving dir3, dir4 and dir7 to repo A, and dir1, dir2, dir5 and dir8 to repo B.

Steps

What you need to do is go through each and every commit in the git history for every branch and filter out commits that modify directories that you don't care about in your new repo. The only flaw of this method is that it will leave those empty, filtered-out commits in the history (see the note after the commands below for a way around that).

Track all branches

First we need to start tracking all branches locally:

for i in $(git branch -r | grep -vE "HEAD|master" | sed 's/^[ ]\+//');
  do git checkout --track $i
done

Then copy your original repo to two separate dirs: repo_a and repo_b.

cp -a source_repo repo_a
cp -a source_repo repo_b

Filter the history

The following command will delete all dirs that exclusively belong to repo B, thus creating repo A. Filtering is not limited to directories: you can provide relative paths to files, dirs, etc.

cd repo_a
git filter-branch --index-filter 'git rm --cached -r dir1 dir2 dir5 dir8 || true' -- --all

cd repo_b
git filter-branch --index-filter 'git rm --cached -r dir3 dir4 dir7 || true' -- --all

Note that the `|| true` prevents git from failing when it tries to filter out dirs mentioned in the `rm` clause at early stages of the git history, where those dirs did not yet exist.
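
If the leftover empty commits bother you, filter-branch can also drop them as it goes with the --prune-empty option - for example, for repo A:

cd repo_a
git filter-branch --prune-empty --index-filter 'git rm --cached -r dir1 dir2 dir5 dir8 || true' -- --all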

Look at the list of branches once again (in both repos):

git branch -l

Set new origins and push

In every repo, we need to remove the old origin and set up a new one. Once that's done, we're ready to push.

Remove old origin:

git remote rm origin

Add new origin:

git remote add origin git@github.com:YourOrg/repo_a.git

Push all tracked branches:

git push origin --all

That's it!

Client Case Study: Vervante - Publishing, Production and Fulfillment Services


A real-life scenario

The following is a real-life example of services we have provided for one of our clients.
Vervante Corporation provides a print on demand and order fulfillment service for thousands of customers, in their case "authors". Vervante needed a way for these authors to keep track of their products; essentially, they needed an inventory management system. So we designed a complete system from the ground up that gives Vervante's authors many custom functions that simply are not offered in any pre-built package.
This is also a good time to mention that you should always view your web presence, in fact your business itself, as a process, not a one-time "setup". Your products will change, your customers will change, the web will change, everything will change. If you want your business to be successful, you will change.

Some Specifics

While it is beyond the scope of this case study to describe all of the programs that were developed for Vervante, it will be valuable for the reader to sample just a few of the areas to understand how diverse a single business can be. Here are a few of the functions we have built from scratch, over several years to continue to provide Vervante, their authors, and even their vendors with efficient processes to achieve their daily business needs.

Requirements

  1. Author Requirement - First, in some cases, the best approach to a problem is to use someone else's solution! Vervante's authors have large data files that are converted to a product, and then shipped on demand as the orders come in. So we initially provided a custom file transfer process so that customers could directly upload their files to a server we set up for Vervante. Soon Vervante's rapid growth outpaced the efficacy of this system, so we investigated and determined the most efficient and cost-effective approach was to incorporate a third-party service. So we recommended a well-known file transfer service and wrote a program to communicate with that service's API. Now a client can easily describe and upload large files to Vervante.
  2. Storage Requirement - The remote storage of these large files caused Vervante a dramatic inefficiency as relates to access times, as they worked daily on these files to format, organize, and create product masters. So we needed to provide Vervante with a local server to act as a file server that was on their local network (LAN), where it could be rapidly accessed and manipulated.
    This was a challenge, as Vervante did not have IT personnel on site. So we purchased an appropriate server, set up everything in our offices, and shipped the complete server to them! They plugged the server into their local network, and with a long phone call, we had the server up and running and remotely managed.
  3. Author Requirement - On the website, the authors first wanted to see what they had in inventory. Some customers provided Vervante with some product components that needed to be included with a complete product, while others relied on Vervante to build all components of their products. They also requested a way to set minimum inventory stock requirements.
    So we built an interface that would allow authors to:
    (a) See their current stock levels for all products,
    (b) View outstanding orders for these items,
    (c) Set minimum inventory levels that they would like to have maintained at the fulfillment warehouse.
    For example a finished product may consist of a book, a CD, and a DVD. A customer may supply the CD and require Vervante to produce the book and the DVD "on demand" for the product. We created a system that tracked all items at a "base" item level, and then allowed Vervante to "build" products with as many of these "base" items as necessary, to create the final product. The base items could be combined to create an item, and two or more items could be combined to produce yet another item. It is a recursive item inventory system, built from scratch specifically for Vervante.
  4. Vervante Vendor (fulfillment warehouse) Requirement - Additionally, the fulfillment warehouse that receives, stores, builds and ships end user products, needed access to this system. They had several needs including:
    • Retrieving pending orders for shipment
    • Creating packing / pick slips for the orders
    • Creating shipping labels for orders
    • Managing returns
    • Inputting customer-supplied inventory
    • Inputting fulfillment-created inventory
    In our administrative user interface for the fulfillment house, we developed a series of customer specific processes to address the above needs. Here is a high level example of how a few of the items on the list above are achieved:
    • The fulfillment house logs into the user admin first thing in the morning, and prints the outstanding orders.
    • The "orders" are formatted similar to a packing slip, and each slip has all line items of the order, and a bar code imprinted on the slip.
    • This document is used as a "pick" slip, and is placed in a "pick" basket. The user then goes through the warehouse, gathers the appropriate items, and when complete the order is placed on a feed belt to the shipper location.
    • When the basket lands in front of the shipper, that person checks the contents of the basket against the slip, and then uses a bar code scanner to scan the order. That scan triggers a query into our system that returns all applicable shipping data into an Endicia or UPS system.
    • A shipping label is created, and the shipping cost and tracking information is returned to our system.
    • Additionally the inventory is decremented accordingly when the order receives a shipping label and tracking number.
  5. Requirements: administrative / accounting - Vervante also needed an administrative / accounting arm, designed to control all of the accounting functions such as:
    • Recording customers' fulfillment charges
    • Recording customers' sales (Vervante sells product for the customers as well as fulfilling outside orders)
    • Determining fulfillment vendor fees and payments
    • Tracking shipping costs
    • Monthly billing of all customers
    • Monthly payments for all customers.
    • Interface with in-house accounting systems and keeping systems in sync
    • Tracking and posting outside order transactions
The above described processes are just a few of the processes that we developed from scratch, and matched to Vervante's needs. It is also a tiny portion of their system.

Last, but not least

Oh, and one other interesting fact: When Vervante first came to us several years ago, they had fewer than 20 customers. Today, they provide order fulfillment and print on demand services for nearly 4000 customers. So when we say to plan ahead for growth, we have experience in that area.

Custom eCommerce Development

"Custom eCommerce" means different things to different people and organizations. For some eCommerce shopping cart sites that pump out literally hundreds of websites a year, it may mean you get to choose from a dizzying array of "templates" that can set your website apart from others.
For others, it may be a slightly more involved arrangement where you can "create categories" to group display of your products on your website ... after you have entered your products into a prearranged database schema.
There are many levels of "custom" out there. Generally speaking, the closer you get to true "custom", the more accurate the term "development" becomes.
It is very important for your business that you decide what fits your needs, and that you match your needs to a platform or company that can provide appropriate services. As you can imagine, this will depend entirely on your business.

Example scenarios

For example, a small one- or two-person business that does fulfillment of online orders may be well suited for a pre-built approach, where you pay a monthly fee to simply log into an admin, add your products, and some content, and the company does the rest. It handles all of the "details."
A slightly larger company that has maybe 5-10 employees, and possibly a staff member with some understanding of websites, may choose to purchase a package that requires more customization and company related input, and perhaps even design or choice of templates.
From this level up, decisions become far more important and complex. For example, even though the previously described company may be perfectly suited with the choice described, if sales are expected to increase dramatically in the near future, or if the company is in a niche market where custom accounting or regulations require specific handling of records, a more advanced approach may be warranted.

What we do

The purpose of this post is not to give you guidelines as to what sort of website service you should buy, or consultancy you should hire for your company. Rather it is to point out some of the types of things that we at End Point do for companies that need a higher level of custom eCommerce development. In fact, the development we do is not limited to eCommerce.

We offer a full range of business consultancy and IT development services. We can guide you through many areas of your business development. True, we primarily provide services to companies that sell things on the web. But we also provide support for inventory management systems in your warehouses, accounts receivable / payable integration with your websites, management of your POS (point of sale) machines, strategic pricing for seasonal products with expiry dates, and the list goes on.

Real-life scenarios

The following is a real-life example of services we have provided for one client.

Case Study: Vervante

Consultant vs Service

Hopefully, this real-life scenario serves as an example of how complex business needs can be, and how an out-of-the-box "eCommerce" website will not work in every circumstance. When you find a good business consultant, you will know it. A consultant will not try to make your business fit into their template; they will listen to you and then assemble and tailor products to fit your business.
Regardless of the state of maturity of your business, very seldom will a single "system" or "website" cover all of your business needs. It will more likely be a collection of systems. Which systems and how they work together is likely what will determine success or failure. The more mature your business, the broader the scope of systems required to support the growing requirements of your business.
So whether you are a sole proprietor getting started with your business, or you are a CTO tasked with organizing and optimizing the many systems in your organization, understanding what type of service or partner you need is the first step. In the future I will spotlight a few other examples of how we have assisted businesses in growing and improving how they do business.

Web Security Services Roundup


Security is often a very difficult thing to get right, especially when it’s not easy to find reliable or up-to-date information or the process of testing can be confusing and complicated. We have a lot of history and experience working on the security of websites and servers, and we’ve found many tools and websites to be very helpful. Here is a collection of those.

Server-side security

There are a number of tools available that can scan your website to check for common vulnerabilities and the quality of SSL/TLS configuration, as well as give great tips on how to improve security for your website.

  • Qualys SSL Labs Server Test takes a simple domain name, performs a series of tests from a variety of clients, and returns a simple letter grade (from A+ down to F) indicating the quality of your SSL/TLS configuration, as well as a detailed summary for a host of configuration options. It covers certificate keys and algorithms; TLS and SSL configurations; cipher suites; handshakes on a wide variety of platforms including Android, iOS, Chrome, Firefox, Internet Explorer and Edge, Safari, and others; common protocols and vulnerabilities; and other details.
  • HTTP Security Report does a similar scan, but provides a much more simplified summary of a website, with a numeric score from 0 to 100. It gives a simple, easy to understand list of results, with a green check mark or a red X to indicate whether something is configured for security or not. It also provides short paragraphs explaining settings and recommended configurations.
  • HT-Bridge SSL/TLS Server Test is very similar to Qualys SSL Labs Server Test, but provides some valuable extra information, such as PCI-DSS, HIPAA, and NIST guidelines compliance, as well as industry best practices and basic analysis of third-party content.
  • securityheaders.io is another letter-grade scan, but focuses on server headers only. It provides simple explanations for each recommended server header and links to guides on how to configure them correctly.
  • Observatory by Mozilla scans and gives information on HTTP, TLS, and SSH configuration, as well as simple summaries from other websites, including Qualys, HT-Bridge, and securityheaders.io as covered above.
  • SSL-Tools is focused on SSL and TLS configuration and certificates, with tools to scan websites and mail servers, check for common vulnerabilities, and decode certificates.
  • Microsoft Site Scan performs a series of simple tests, focused more on general website guidelines and best practices, including tests for outdated libraries and plugins which can be a security issue.
  • testssl.sh, the final website scanning tool I’ll cover, is a more advanced bash script that covers many of the same things these other websites do, but provides lots of options for fine-tuning test methods, returned information, and testing abnormal configurations. It’s also open source and doesn’t rely on any third parties.

These websites provide valuable information on SSL/TLS which can be used to create a secure, fast, and functional server configuration:

  • Security/Server Side TLS on the Mozilla wiki is a fantastic page which provides great summaries, recommendations, and reference information on many TLS topics, including handshakes, OCSP Stapling, HSTS, HPKP, certificate ciphers, and common attacks.
  • Mozilla SSL Configuration Generator is a simple tool that generates boilerplate server configuration files for common servers, including Apache and Nginx, and specific server and OpenSSL versions. It also allows you to target “modern”, “intermediate”, or “old” clients and servers, which will give the best configuration possible for each level.
  • Is TLS Fast Yet? is a great, simple, and to-the-point informational website which explains why TLS is so important and how to improve its performance so it has the smallest impact possible on your website’s speed.

Client-side security

These websites provide information and diagnostic tools to ensure that you are using a secure browser.

  • badssl.com gives a list of links to subdomains with various SSL configurations, including badly configured SSL, so you can have a good idea of what a well-configured website looks like versus one with errors in configuration, weak ciphers or key exchange protocols, or insecure HTTP forms.
  • IPv6 Test checks your network and browser for IPv6 support, showing you your ISP, reverse DNS pointers, both your IPv4 and IPv6 addresses, and giving an idea of when your computer or network may have problems with dual-stack IPv4 + IPv6 remote hosts or DNS.
  • How’s My SSL? and Qualys Labs SSL Client Test both check your browser for support of SSL/TLS versions, protocols, ciphers, and features, as well as susceptibility to common vulnerabilities.

General Tools

  • NeverSSL is a simple website that promises to never use SSL. Many public wifi networks require you to go through a payment or login page, and that redirect can be blocked when you first try to access a well-secured website such as Google, Facebook, Twitter, or Amazon, causing trouble connecting at all. NeverSSL provides an easy and simple way to reach that login page.
  • crt.sh is a search engine for public TLS certificate information. It provides a history of certificates for a given domain name, with information including issuer and issue date, as well as an advanced search.
  • Digital Attack Map is an interactive map showing DDoS attacks across the world.
  • The Internet-Wide Scan Data Repository is a public archive of scans across the internet, intended for research and provided by the University of Michigan Censys Team.
  • take-a-screenshot.org is a simple website that shows how to take a screenshot on a variety of operating systems and desktop environments. It’s a fantastic tool to help less technically-minded people share their screens or issues they’re having.

Liquid Galaxy Goes to Chile for IMPAC4



Marine Protected Areas: Bringing the people and ocean together


Earlier this month, The Marine Conservation Institute and the Waitt Foundation brought a Liquid Galaxy to Chile to showcase interactive and panoramic ocean content at the Fourth International Marine Protected Areas Congress in La Serena.  This conference was a convergence of marine biologists, ocean agencies, and environmentalists from around the globe to discuss the state of and future development initiatives of Marine Protected Areas worldwide.

The Marine Conservation Institute is working on a mapping project called MPAtlas, which visually catalogs the development of Marine Protected Areas across the globe, as well as the Global Ocean Refuge System (GLORES), which pushes for elevated standards for Marine Protected Areas and advocates for a 30% protected marine ecosystem by 2030.

mpatlas.org

We built new content to showcase the GLORES areas in Google Earth, as well as data visualizations of the global system of Marine Protected Areas. In addition, we collected past oceanographic content we’ve developed for the Liquid Galaxy platform. This included underwater panoramic media from the Catlin Seaview Survey, Mission Blue Hope Spots, and Google Street View collections of underwater photos, such as this project which catalogs the Great Barrier Reef.



Minister of the Environment Marcelo Mena Carrasco gives the Liquid Galaxy a go

Aside from showcasing this curated content, the Liquid Galaxy served as a comprehensive interactive tool for scientists and environmentalists to showcase their research areas during the break periods between lectures and events occurring at the Congress. Since the Liquid Galaxy utilizes the entire globe, conference attendees were able to free fly to their respective research and/or proposed protected areas as an impromptu presentation aid and further explore underwater terrain and panoramic media in their location.

Liquid Galaxy at IMPAC4 - La Serena, Chile 2017

The Liquid Galaxy platform is featured in museums and aquariums around the world, and we are thrilled that it is being used as a tool to study and conserve oceans and nature. We recently had the opportunity to participate in The Ocean Conference at the United Nations, and are excited the system continues to be utilized to study and prevent future detrimental changes to our planet. We hope to have more opportunities to create content geared toward nature conservation, as well as opportunities to share the Liquid Galaxy with environmentalists so that the system will continue to be used as a tool for visualizing research data in new and interesting ways.

Working on production systems


(Not as easy as it looks)
Photo by Flickr user 'edenpictures'

As consultants, the engineers at End Point are often called upon to do work on production systems - in other words, one or more servers that are vital to our client's business. These range from doing years of planning to perform a major upgrade for a long-standing client, down to jumping into a brand-new client for an emergency fix. Either way, the work can be challenging, rewarding, and a little bit nerve-wracking.

Regardless of how you end up there, following some basic good practices may reduce the chance of problems coming up, keep you calm during the chaos, and make the process easier for both you and the client.

Preparation

Unless this is a true emergency, doing major work on a production system (e.g. upgrading a database server) should involve a good amount of preparation. By the time the big night arrives (yes, it is always late at night!) you should have gone through this list:

  • Do lots and lots of testing. Use systems as close as possible to production, and run through the process until it becomes second nature.
  • Know the impact on the business, and the downtime window anticipated by the rest of the company.
  • Have a coworker familiar with everything on standby, ready to call in if needed.
  • Document the process. For a large project, a spreadsheet or similar document can be quite helpful.
    • Who are the people involved, how to contact them, and what general role do they play?
    • Which task is being done at what time, who is the primary and backup for it, and what other tasks does it block?
    • How is success for each stage measured?
    • What is the rollback plan for each step? When is the point of no return reached?
  • Set up a shared meeting space (IRC, Skype, Slack, Zulip, HipChat, etc.)
  • Confirm connections (e.g. VPN up and running? Passwords not set to expire soon? Can everyone get to Slack? SSH working well?)

Screen Up

The night usually begins with connecting to a client server via SSH. The first order of business should be to invoke 'screen' or 'tmux', which are persistent session managers (aka terminal multiplexers). These keep your sessions alive even if your network drops, so you can easily pick up where you left off. They also allow other people to view and/or join in on what you are doing. Finally, they enable you to easily view and control multiple windows. Some screen-specific tips:

  • Name your screen something obvious to the task such as "bucardo-production-rollout". Always give it a name to prevent people from joining it by accident. I often use just my email, i.e. screen -S greg@endpoint.com or tmux new -s greg_endpoint_com.
  • Keep organized. Try to keep each window to one general task, and give each one a descriptive name:
    • screen: Ctrl-a A    tmux: Ctrl-b ,
    • I find that uppercase names stand out nicely from your terminal traffic.
    • Splitting windows by task also helps scrollback searching.
  • Boost your scrollback buffer so you can see what happened a while ago. The default value is usually much too low.
    • Inside /etc/screenrc or ~/.screenrc: defscrollback 50000
    • Inside /etc/tmux.conf or ~/.tmux.conf: set-option -g history-limit 50000
  • Develop a good configuration file. Nothing too fancy is needed, but being able to see all the window names at once on the bottom of the screen makes things much easier.
  • Consider logging all your screen output (see the example just below this list).
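
Both multiplexers can log natively; a minimal sketch (the screen log name is the tool's default, the tmux log name is made up here):

$ screen -L -S bucardo-production-rollout    ## logs to ./screenlog.0
$ tmux new -s greg_endpoint_com \; pipe-pane -o 'cat >> ~/tmux-production.log'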

Discovery and Setup

If you don't already know, spend a little bit of time getting to know the server. A deep analysis is not needed, but you should have a rough idea how powerful it is (CPU, memory), how big it is (disk space), what OS it is (distro/version), and what else is running on it. Although one should be able to easily work on any unmodified *nix server, I almost always make a few changes:

  • Install minimal support software. For me, that usually means apt-get install git-core emacs-nox screen mlocate.
  • Put in some basic aliases to help out. Most important being rm='rm -i'.
  • Switch to a lower-priv account and do the work there when possible. Use sudo instead of staying as root.

For Postgres, you also want to get some quick database information. There are many things you could learn, but at a bare minimum check out the version, the per-user settings, the databases and their sizes, and all the non-default configuration settings:

select version();
\drds
\l+
select name, setting, source from pg_settings where source <> 'default' order by 1;

Version Control (git up)

One of the earliest steps is to ensure all of your work is done in a named subdirectory in a non-privileged account. Everything in this directory should be put into version control. This not only timestamps file changes for you, but allows quick recovery from accidentally removing important files. All your small scripts, your configuration files (except .pgpass), your SQL files - put them all in version control. And by version control, I of course mean git, which has won the version control wars (a happy example of the most popular tool also being the best one). Every time you make a change, git commit it.

psql

As a Postgres expert, 99% of my work is done through psql, the canonical command-line client for Postgres. I am often connecting to various database servers in quick succession. Although psql is an amazing tool, there are important considerations to keep in mind.

The .psql_history file, along with readline support, is a wonderful thing. However, it is also a great footgun, owing to the ease of using the "up arrow" and "Ctrl-r" to rerun SQL statements. This can lead to running a command on server B that was previously run on server A (and which should never, ever be run on server B!). Here are some ways around this issue:

One could simply remove the use of readline when on a production database. In this way, you will be forced to type everything out. Although this has the advantage of not accidentally toggling back to an older command, the loss of history is very annoying - and it greatly increases the chance of typos, as each command needs to be typed anew.

Another good practice is to empty out the psql_history file if you know it has some potentially damaging commands in it (e.g. you just did a lot of deletions on the development server, and are now headed to production). Do not simply erase it, however, as the .psql_history provides a good audit trail of exactly what commands you ran. Save the file, then empty it out:

$ alias clean_psql_history='cat ~/.psql_history >> ~/.psql_history.log; truncate -s0 ~/.psql_history'

Although the .psql_history file can be set per-database, I find it too confusing and annoying in practice to use. I like being able to run the same command on different databases via the arrow keys and Ctrl-r.

It is important to exit your psql sessions as soon as possible - never leave it hanging around while you go off and do something else. There are multiple reasons for this:

  • Exiting psql forces a write of the .psql_history file. Better to have a clean output rather than allowing all the sessions to interleave with each other. Also, a killed psql session will not dump its history!
  • Exiting frees up a database connection, and prevents you from unintentionally leaving something idle in transaction.
  • Coming back to an old psql session risks a loss of mental context - what was I doing here again? Why did I leave this around?
  • Less chance of an accidental paste into a window with a psql prompt (/voice_of_experience).

Another helpful thing for psql is a good custom prompt. There should always be some way of telling which database - and server - you are using just by looking at the psql prompt. Database names are often the same (especially with primaries and their replicas), so you need to add a little bit more. Here's a decent recipe, but you can consult the documentation to design your own.

$ echo "\set PROMPT1 '%/@%m%R%#%x '">> ~/.psqlrc
$ psql service=PROD
product_elements@topeka=>

Connection Service File

Using a connection service file to access Postgres can help keep things sane and organized. Connection files allow you to associate a simple name with multiple connection parameters, allowing abstraction of those details in a manner much safer and cleaner than using shell aliases. Here are some suggestions on using connection files:

  • If possible, use the local user's file (~/.pg_service.conf), but the global file is fine too. Whichever you choose, do not put the password inside them - use the .pgpass file for that.
  • Use short but descriptive service names. Keep them very distinct from each other, to reduce the chance of typing in the wrong name.
  • Use uppercase names for important connections (e.g. PROD for the connection to an important production database). This provides another little reminder to your brain that this is not a normal psql connection. (Remember, PROD = Please Respect Our Data)
  • Resist the temptation to alias them further. Force yourself to type everything out each time, e.g. "psql service=PROD" each time. This keeps your head focused on where you are going.
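
For reference, a service file entry is a simple INI-style block; this sketch uses hypothetical connection details:

## ~/.pg_service.conf
[PROD]
host=db1.example.com
port=5432
dbname=product_elements
user=greg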

Dangerous SQL

Some actions are so dangerous, it is a good idea to remove any chance of direct invocation on the wrong server. The best example of this is the SQL 'truncate' command. If I find myself working on multiple servers in which I need to truncate a table, I do NOT attempt to invoke the truncate command directly. Despite all precautions, there are many ways to accidentally run the same truncate command on the wrong server, whether via a .psql_history lookup, or simply an errant cut-n-paste error. One solution is to put the truncate into a text file, and then invoke that text file, but that simply adds the chance that this file may be invoked on the wrong database. Another solution is to use a text file, but change it when done (e.g. search and replace "truncate" to "notruncate"). This is slightly better, but still error prone, as it relies on someone remembering to change the file after each use, and causes the file to no longer represent what was actually run.

For a better solution, create a function that only exists on one of the databases. For example, if you have a test database and a production database, you can run truncate via a function that only exists on the test database. Thus, you may safely run your function calls on the test server, and not worry if the functions accidentally get run on the production server. If they do, you end up with a "function not found" error rather than realizing you just truncated a production table. Of course, you should add some safeguards so that the function itself is never created on the production server. Here is one sample recipe:

\set ON_ERROR_STOP on

DROP FUNCTION IF EXISTS safetruncate(text);

CREATE FUNCTION safetruncate(tname text)
RETURNS TEXT
LANGUAGE plpgsql
AS $ic$
  BEGIN
    -- The cluster_name setting can be quite handy!
    PERFORM 1 FROM pg_settings WHERE name = 'cluster_name' AND setting = 'testserver';
    IF NOT FOUND THEN
      RAISE EXCEPTION 'Cannot create this function on this server!';
    END IF;

    EXECUTE FORMAT('truncate table %s', tname);

    RETURN 'Truncated table ' || tname;
  END;
$ic$;

Now you can create text files full of indirect truncate calls - filled with SELECT safetruncate('exultations'); instead of literal truncate calls. This file may be safely checked into git, and copied and pasted from without worry.

Record keeping

The value of keeping good records of what you are doing on production systems cannot be overstated. While you should ultimately use whatever system is best for you, I like to keep a local text file that spells out exactly what I did, when I did it, and what happened. These notes should be detailed enough that you can return to them a year later and explain to the client exactly what happened.

The primary way to document things as you go along is with good old fashioned cut and paste. Both the .bash_history and .psql_history files provide automatic tracking of entered commands, as a nice backup to your notes. Make liberal use of the tee(1) command to store command output into discrete files (which reduces the need to rely on scrollback).

Rather than entering commands directly into a psql prompt, consider putting them into a text file and then feeding that file to psql. It's a little extra work, but the file can be checked into git, giving an excellent audit trail. Plus, you may have to run those commands again someday.
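
For example (the file names here are hypothetical):

$ git add remove_stale_orders.sql
$ git commit -m 'SQL for the stale order cleanup'
$ psql service=PROD -f remove_stale_orders.sql | tee remove_stale_orders.out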

If you find yourself doing the same thing over and over, write a shell script. This even applies to psql commands. For example, I recently found myself having to examine tables on two servers, and needed to quickly check a table's size, indexes, and whether anyone was currently modifying it. Thus, a shell script:

#!/bin/bash
## Quick table check: structure, size, and any queries currently touching it
psql service=PROD -c "\\d $1"
psql service=PROD -Atc "select pg_relation_size('$1')"
psql service=PROD -Atc "select query from pg_stat_activity where query ~ '$1'"

Wrap Up

When you are done with all your production tasks, write up some final thoughts on how it went, and what the next steps are. Read through all your notes while it is all fresh in your mind and clean them up as needed. Revel in your success, and get some sleep!

Using PostgreSQL cidr and inet types, operators, and functions


A common problem in database design is how to properly store IP addresses: they are essentially positive 32- or 128-bit integers, but they’re more commonly written as blocks of 8- or 16-bit integers written in decimal and separated by periods (in IPv4) or written in hexadecimal and separated by colons (in IPv6). That may lead many people to store IP addresses as strings, which comes with a host of downsides.

For example, it’s difficult to do subnet comparisons and validation requires complex logic or regular expressions, especially when considering the variety of ways IPv6 addresses can be stored with upper- or lower-case hexadecimal letters, :0000: or :0: for a group of zeros, or :: as several groups of zeros (but only once in an address).

Storing them as integers or a fixed number of bytes forces parsing and a uniform representation, but is otherwise no better.

To solve these problems in PostgreSQL, try using the inet and cidr types, which are designed for storing host and network addresses, respectively, in either IPv4 or IPv6 or both.
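
A quick way to see the validation these types enforce, sketched in psql (the prompt matches the examples below; any scratch database will do):

phin=> select '192.168.0.256'::inet;
ERROR:  invalid input syntax for type inet: "192.168.0.256"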

In many cases, these types could end up simply being used as glorified integers: they display nicely as IP addresses should, and support addition, subtraction, bitwise operations, and basic numeric comparisons:

phin=> select inet '192.168.0.1' + 256;
  ?column?
-------------
 192.168.1.1
(1 row)

phin=> select inet '::2' - 1;
 ?column?
----------
 ::1
(1 row)

phin=> select inet '192.168.0.1' - inet '192.168.0.0';
 ?column?
----------
        1
(1 row)

phin=> select inet 'Fff0:0:007a::' > inet '192.168.0.0';
 ?column?
----------
 t
(1 row)

(It’s worth noting that there is no addition operation for two inet or cidr values.)

But if you take a look at the fantastic official documentation, you’ll see that these types also support helpful containment operators and utility functions, which can be used, for example, when working with historical IP addresses of ecommerce customers stored in a database.

These include operators to check if one value contains or is contained by another (>> and <<), or the same with equality (>>= and <<=), or containment going either way (&&). Take this very real™ table of ecommerce orders, where order #3 from IP address 23.239.26.78 has committed or attempted some sort of ecommerce fraud:

phin=> select * from orders order by id;
 id |        ip        | fraud
----+------------------+-------
  1 | 23.239.26.161/24 | f
  2 | 23.239.232.78/24 | f
  3 | 23.239.26.78/24  | t
  4 | 23.239.23.78/24  | f
  5 | 23.239.26.78/24  | f
  6 | 23.239.232.78/24 | f
(6 rows)

By using the network(inet) function, we can identify other orders from the same network:

phin=> select * from orders where network(ip) = (
    select network(ip) from orders where id=3
);
 id |        ip        | fraud
----+------------------+-------
  1 | 23.239.26.161/24 | f
  5 | 23.239.26.78/24  | f
  3 | 23.239.26.78/24  | t
(3 rows)

Or, if we’ve identified 23.239.20.0/20 as a problematic network, we can use the <<= ("is contained by or equals") operator:

phin=> select * from orders where ip <<= inet '23.239.20.0/20';
 id |        ip        | fraud
----+------------------+-------
  1 | 23.239.26.161/24 | f
  4 | 23.239.23.78/24  | f
  5 | 23.239.26.78/24  | f
  3 | 23.239.26.78/24  | t
(4 rows)

This is a bit of a juvenile example, but given a larger and more realistic database, these tools can be used for a variety of security and analytics purposes. Identifying common networks for fraud was already given as an example. They can also be used to find common network origins of attacks; tie historic data to known problematic networks, addresses, and proxies; help correlate spam submissions on forums, blogs, or in email; and create and analyze complex network or security auditing records.

At any rate, they’re a useful tool and should be taken advantage of whenever the opportunity presents itself.

AngularJS is not Angular


Why am I doing this?

I often see people using the AngularJS and Angular names interchangeably. It's an obvious mistake to someone familiar with front-end programming.

If you don't believe me (why would you?) just go to the Angular team repository or Wikipedia (Angular and AngularJS).

AngularJS (1.X)

AngularJS is a popular open-source JavaScript framework created by Miško Hevery, released in 2010, and subsequently taken over by Google. It's not the same thing as Angular. The latest stable version of the framework is 1.6.6.

Angular (2+)

Angular is an open-source TypeScript-based framework created by Google from scratch. It has been totally rewritten and is not compatible with AngularJS. The latest stable version of the framework is 4.4.4. They started from version 2 to avoid even more confusion.

Summary

I feel I have made the world better. Go now and correct everyone (it's such fun)!

Disaster Recovery - Miami to Dallas in one hit

Hurricane Irma came a-knocking but didn’t blow Miami away.

Hurricanes are never fun, and neither is knowing that your business servers may be in the direct path of a major outage due to a natural disaster.

Recently we helped a client prepare a disaster recovery environment four days out, with Hurricane Irma approaching Miami — where, as it happens, their entire server ecosystem sat. In recent months we had been discussing a long-term disaster recovery plan for this client’s infrastructure, but the final details hadn’t been worked out, and no work had begun on it, when the news of the impending storm started to arrive.

Although the Miami datacenter they are using is highly regarded and well-rated, the client had the foresight to think about an emergency project to replicate their entire server ecosystem out of Miami to somewhere a little safer.

Fire suits on, battle helmets ready, GO! Two team members jumped in and did an initial review of the ecosystem: six Linux servers, one Microsoft SQL Server database with about 50 GB of data, and several little minions, all hosted on a KVM virtualization platform. OK, easy, right? Export the VM disks and stand them up in a new datacenter on a similar KVM-based host.

But no downtime could be scheduled at that time, and cloning the database would take too long to make a replica. If we couldn’t clone the VM disks without downtime, we needed another plan. Enter our save-the-day idea: use the database clone from the development machine.

So, first things first: build the servers in the new environment using the cloned copy, and rsync the entire disks across, hoping that a restart would return each machine to a working state. Then take the development database, use the Microsoft tools at our disposal to replicate one current snapshot, and replicate row-level details from there, to reduce the amount of lost data should the datacenter lose its connection.

Over the course of the next 16 hours, with two DevOps engineers, we replicated six servers, migrated 100+ GB of data twice, and had a complete environment ready for a manual cutover.

Luckily the hurricane didn’t cause any damage to the datacenter, which never lost power or connectivity, so the sites never went down.

Even though it wasn’t used in this case, this emergency work did have a silver lining: Until this client’s longer-term disaster recovery environment is built, these system replicas can serve as cold standby, which is much better than having to rebuild from backups if their primary infrastructure goes out.

That basic phase is out of the way, so the client can breathe easier. Along the way, we produced some updated documentation, and gained deeper knowledge of the client’s software ecosystem. That’s a win!

The key takeaways here: Don’t panic, and do a little planning ahead of time. Leaving things to the last minute rarely works well.
And preferable to all of the above: Plan and build your long-term disaster recovery environment now, well before any possible disaster is on the horizon!

To all the people affected by Hurricane Irma we wish a speedy rebuild.