Sunday, May 1, 2016

Are You, or Someone You Know, Being a Crab at Work?

We are human.  We are emotionally driven creatures.  It's who we are.  We should embrace our humanness, but we should also try to be aware when our natural tendencies result in destructive negativity.  When we discover these tendencies, we owe it to ourselves and to others to make an effort to improve our relationships.

We have the tendency to value our contributions and accomplishments more than we value the contributions and accomplishments of others.  I know I've done it, and I feel the hurt when it is done to me.  That means I've hurt others when I have done it to them.

A great illustration of this natural, normal human tendency is the observation of crabs in a bucket.  If you put a crab in a bucket, and it can get out, then it will get out.  If you fill the bucket with crabs, however, they will pull each other down so that none escape.  This ensures their collective demise.


We fool ourselves into thinking we are giving constructive criticism when we pull others down around us.  It's easy to disguise negativity as trying to help others improve.  Here are some example statements:

Example 1:  "You should not be trying to solve problem X, because your solution does nothing to address problem Y."

The conclusion that X should not be addressed because Y exists does not follow.  Now, if we knew that solving Y had more value, we should solve Y first.  But that still doesn't mean that solving X has no value.  It also does not mean that we should not have tried to solve problem X before we knew about problem Y.

Example 2:  "Yes, you were successful in helping members of team P by doing A, but they have problems B, C and D that A does not address.  You work for team Q and should not be helping team P.  Team P needs to solve other problems that you cannot address."

The conclusion that we should not help others because we can't solve all of their problems is a non sequitur.  We should think of the greater organization when it is appropriate to do so.  If helping others is low effort and high reward, then that is the thing we should do.  If others in the organization are resisting their own crab-like tendencies, they will show appreciation and recognize your value.

My challenge to you, fellow human, is to observe how you observe the accomplishments of others.  I'll say it again -- observe how you observe.  Do you have a heart of gratitude for the good work they produced, or does your mind immediately go to the bottom of the crab bucket? As a human, my mind goes to the bottom of the crab bucket.  If I'm not recognized for my accomplishments, I don't want to acknowledge the accomplishments of others.  I try to be intentional about appreciating others, but it is not natural.  It takes effort to encourage others to give their best.

Here is a video of how I envision our organization of crabs working together:





Thursday, January 9, 2014

Scala External DSL's -- a 12 Step Program

It's been months since I've posted.  I've been working on various Scala projects and sharpening my skills.  I had a major breakthrough in crafting Domain-Specific Languages (DSL's).  The secret sauce was learning Scala Parser Combinators.  Parser combinators make it easy to define lexical rules and tie them to mini-parsers that build objects from the bottom up.  In addition to the basic JSON parser from "Programming in Scala" (published by Artima), I've created two useful DSL's.  The first was a DSL to configure a tic-tac-toe-playing AI, and the second was a DSL for configuring badges for gamifying applications.

The JSON example from "Programming in Scala" does not demonstrate test-driving a DSL, so I created a project on Github to run through the JSON example as a code kata.  You can access this example here:  https://github.com/tflander/scalaJsonDsl

The first time you go through the kata, it's highly recommended that you either refer to the example in the book, or cheat by looking at the solutions.  You can't really learn parser combinators by guessing.  You either know how to use them or you don't.  Fortunately, they are pretty easy once they click in your head.  They are much easier than JavaCC or JParsec.
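To give a taste of what parser combinators look like, here is a minimal sketch of a JSON grammar in the spirit of the book's example.  The class and object names are my own, and the grammar is intentionally incomplete -- a hedged sketch, not the kata solution:

import scala.util.parsing.combinator.JavaTokenParsers

// A tiny JSON grammar built from mini-parsers.  Each rule is a Parser[...]
// combined with ~, ~>, <~ and repsep, then mapped with ^^ to build Scala
// objects from the bottom up.
class MiniJsonParser extends JavaTokenParsers {

  def value: Parser[Any] =
    obj | arr | stringLiteral |
    floatingPointNumber ^^ (_.toDouble) |
    "true" ^^^ true | "false" ^^^ false | "null" ^^^ null

  def obj: Parser[Map[String, Any]] =
    "{" ~> repsep(member, ",") <~ "}" ^^ (_.toMap)

  def arr: Parser[List[Any]] =
    "[" ~> repsep(value, ",") <~ "]"

  def member: Parser[(String, Any)] =
    stringLiteral ~ (":" ~> value) ^^ { case name ~ v => (name, v) }
}

object MiniJsonParserDemo extends App {
  val parser = new MiniJsonParser
  println(parser.parseAll(parser.value, """{"badge": "Night Owl", "points": 5}"""))
}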
Note that this project has branches step0 through stepN. These branches form a step-by-step kata for test-driving a Scala JSON parser.
To work through this project as a code kata, check out step0. Build and run the tests. Fix the broken test, then check out the next step for the next challenge. Repeat until you've built the entire JSON parser for adding JSON as a DSL.
The project has the Eclipse plug-in configured. If you want to import the project into Eclipse, run 'sbt eclipse' to generate the Eclipse project files for import. You can also use Typesafe Activator instead of sbt.
Enjoy!
Adapted from "Programming in Scala: A comprehensive step-by-step guide", 2nd edition, by Martin Odersky, Lex Spoon, and Bill Venners, published by Artima.



Thursday, March 21, 2013

Lessons Learned from a Group Code Review

I attended the Ann Arbor Scala Enthusiasts user group last night.  I drove from my workplace in Farmington Hills to Ann Arbor, then back home to Warren.  I wanted a panel of experts to review and critique some of the Scala code I've been writing.  They were very gracious in honoring my request.  Here are some of my lessons learned:

  • Expect to experience an emotional response to criticism.  I went into the office of SRT Solutions with the selfish purpose of learning how to be a better Scala developer.  I wanted to reduce the learning curve and was grateful for the feedback I received.  Still, I could feel the blood rush to my face as I was following the direction of the group.  I felt hot as I was moving code as fast as I could in response to the input I solicited.  This response took me by surprise.  We humans are emotional creatures.  While we should seek to grow by exposing ourselves to uncomfortable situations, we will still experience discomfort while growing.
  • Don't abandon SRP just because you are working in a more powerful language.  SRP is the Single Responsibility Principle.  In my simple mind it means: at any level of abstraction -- closure, function, class, package, or project -- do one thing and do it well.  Unlike Java, Scala allows nested functions.  This lets you bury implementation details inside a method without having to create private methods or an object model for delegation.  With this power comes responsibility.  You want your code to be expressive.  Don't force the developers who come behind you to wade through a bunch of closures of implementation details just to figure out what a method is doing.  (See the short sketch after this list.)
  • Don't be afraid to write ugly code, but go back and clean it.  This is not a new lesson, but I want to contrast it with the previous point regarding SRP.  Since Scala is more powerful than Java, try to find out where nested methods and other constructs make sense.  Push the boundaries, then scale back to what is reasonable.  Be an expert in cleaning ugly code.  Write code fast, but take a step back with every success and think about the person who has to read your code.  Be fiscally responsible with their time.  Developers spend more time reading code than writing code.  Be willing to sacrifice 10 minutes of time for code cleanup to save 30 minutes of the next developer's time.
  • Don't be in a rush to dial up your Scala skills to 11.  Scala is an evolving language.  The language developers and enthusiasts continue to debate and refine the core language features. Be judicious about the language features you integrate into your applications and systems development.  You don't want to rely on a feature that may be deprecated in a future release.
  • Not everyone should be a language developer.  Even though the name Scala is derived from the term "Scalable Language", don't think that you need to develop a robust DSL (domain-specific language) for every business problem.  Do you really need the overhead of that implicit converter just to make your code read like English text?
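For the SRP point above, here is the kind of nested-function shape I have in mind.  It's a contrived sketch with made-up names, but it shows how a nested def can hide a small detail without creating a private method or a helper object:

// Contrived example: the outer method reads as two clear steps, while the
// nested def keeps a formatting detail out of the object's public surface.
object ReportFormatter {

  def formatReport(lines: Seq[String]): String = {
    def numbered(line: String, index: Int): String = s"${index + 1}. $line"

    lines.zipWithIndex
      .map { case (line, i) => numbered(line, i) }
      .mkString("\n")
  }
}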
Much gratitude to Diane Marsh and the Ann Arbor Scala Enthusiasts group for freely giving me these lessons.  I encourage you to seek out people who have blazed whatever path you are currently taking in life.

Saturday, February 9, 2013

Types of Badges for Achievements

I've been working on a badges system.  I'm beginning to realize that it's possible to create fairly complex rules around badges.  Any particular system is likely to implement a small sub-set of possible badge types.  I want to keep an open mind on how different types of badges behave, but don't want to build complexity that won't be used.  This post serves to document ideas that I may or may not build out in code.

Simple badge -- Represents an achievement and does not have any special rules.  When unearned, the user gets to see a gray version of the badge and the description of how to earn the badge.  You can only earn a simple badge once.

Stacked badge -- A badge that you can earn multiple times.  I'm not sure why this might be useful.  See Leveled badge.

Mystery badge -- When unearned, the user gets to see an obscured placeholder indicating that they need to explore the system to discover how to earn the mystery badge.

Surprise badge -- When unearned, the user does not see that the badge exists.  These are created for user delight, rather than to encourage specific behavior.  An example might be a "Night Owl" badge earned when the user does something between midnight and 4:00 am.  Showing this badge as unearned would ruin the surprise.

Progressed badge -- A badge that requires X number of events to earn.  Earners see a visual indication of progress as they get closer to earning the badge.

Leveled badge -- A badge that requires a task to be performed multiple times.  When unearned, the user sees their progress.  An example might be to access the system for 5 consecutive days, 10 consecutive days, and 25 consecutive days.  Potentially, leveled badges could also be mystery badges or surprise badges.  If this is the case, progress would be tracked, but not shown when the badge is unearned.  When viewing earned badges, higher-level badges would replace lower-level badges.

Locked badge -- A badge that doesn't appear as unearned until the user earns a badge that acts as a prerequisite.  The purpose is to avoid cluttering an earner's list of unearned badges.  Leveled badges could be locked badges.  In this case, the earner would only see the next unearned leveled badge, and would not see how many levels exist.

Forked & Locked badge -- A locked badge where unlocking opens up multiple paths.  For example, if a learning system has flash cards, earning the flash card badge could unlock badges for repetition, for speed and accuracy, and for exploring a number of card decks.

Meta badge -- Behaves like a simple badge, except that it is earned by earning a specific combination or number of other badges.

Scored badge -- A badge with a point value.  Easy badges would be worth a low number of points, and more difficult badges would have a higher point value.  Scored badges are intended to encourage behavior that requires more commitment from the user.

Some kind of badge type that I didn't think of? -- I don't know what this would be.
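To make these ideas a little more concrete, here is a rough Scala sketch of how a few of these badge types might be modeled as a sealed trait hierarchy.  The names and fields are hypothetical -- this is not code from my actual badges system:

// Hypothetical model of a few badge types as an algebraic data type.
sealed trait Badge {
  def name: String
  def description: String
}

// No special rules; can only be earned once.
case class SimpleBadge(name: String, description: String) extends Badge

// Requires `required` qualifying events; progress is shown while unearned.
case class ProgressedBadge(name: String, description: String, required: Int) extends Badge

// Hidden entirely until earned (e.g. a "Night Owl" badge).
case class SurpriseBadge(name: String, description: String) extends Badge

// Earned by earning all of the named prerequisite badges.
case class MetaBadge(name: String, description: String, prerequisites: Set[String]) extends Badge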

Monday, January 28, 2013

Processing Images with Imagemagick, Scala, and Heroku

Imagemagick is a popular and powerful image processing tool.  You can use it from any JVM language such as Java and Scala by using im4java.  The im4java library is not a pure Java library.  Rather, it wraps the command-line Imagemagick program.  Fortunately, Imagemagick is popular enough to be available through cloud computing platforms such as Heroku.

I'm working on a badges application.  I have two image processing requirements:

  • Resize an uploaded badge to small, medium, and large with conversion to PNG
  • Create gray versions of badges in each size to represent unearned badges
My requirements are simple, but it's nice that Imagemagick gives me a lot of headroom should more complex requirements come down the pipeline in the future.

These features were easy to implement in Scala using the following code:

import java.awt.image.BufferedImage

import org.im4java.core.{ConvertCmd, IMOperation, Stream2BufferedImage}

object BadgeImageProcessor {

  def resize(originalImage: BufferedImage, size: Int): BufferedImage = {

    val cmd = new ConvertCmd
    val s2b = new Stream2BufferedImage()
    cmd.setOutputConsumer(s2b)

    def createResizeCommands(): IMOperation = {
      val op = new IMOperation()
      op.addImage()
      op.resize(size, size)
      op.addImage("png:-")
      return op
    }

    cmd.run(createResizeCommands, originalImage)
    s2b.getImage()
  }

  def grayscale(originalImage: BufferedImage): BufferedImage = {
    val cmd = new ConvertCmd
    val s2b = new Stream2BufferedImage()
    cmd.setOutputConsumer(s2b)

    def createGrayScaleCommands(): IMOperation = {
      val op = new IMOperation()
      op.addImage()
      op.colorspace("gray")
      op.addImage("png:-")
      return op
    }

    cmd.run(createGrayScaleCommands, originalImage)
    s2b.getImage()
  }
}

The program flow is straightforward:

  • Create an ImageMagick command that outputs a BufferedImage from a response stream
  • Define a closure to apply operations to the original buffered image and output the response as a PNG BufferedImage
  • Run the operations against the original image, and return the resulting BufferedImage
The program is straightforward, but the code is repetitive.  If I were using Java, I would eliminate the repetition by creating an abstract image processor taking polymorphic processing commands.  With Scala, I use higher-order functions.  This results in less code and avoids the object abstraction:

object BadgeImageProcessor {

  def resize(originalImage: BufferedImage, size: Int): BufferedImage = {
    def toNewSize(op: IMOperation, size: Int) = {
      op.resize(size, size)
    }    
    processImageMagick(originalImage, toNewSize(_, size))
  }

  def grayscale(originalImage: BufferedImage): BufferedImage = {
    def toGray(op: IMOperation) = {
      op.colorspace("gray")
    }
    processImageMagick(originalImage, toGray(_))
  }

  private def processImageMagick(originalImage: BufferedImage, commands: IMOperation => Unit): BufferedImage = {
    val cmd = new ConvertCmd
    val s2b = new Stream2BufferedImage()
    cmd.setOutputConsumer(s2b)

    def createCommands(): IMOperation = {
      val op = new IMOperation()
      op.addImage()
      commands(op)
      op.addImage("png:-")
      return op
    }

    cmd.run(createCommands, originalImage)
    s2b.getImage()
  }
}


Functional programming allows you to pass a function into another function.  The method "processImageMagick(...)" takes two parameters: the image to process and a function that takes an IMOperation and returns nothing special (Unit).  The magic is the ability to specify a placeholder (the underscore character) to represent the undefined IMOperation, allowing processImageMagick(...) to define the IMOperation and call the "toGray(_)" or "toNewSize(_)" methods.

Since I'm only performing one ImageMagick operation in each of my methods (resize and grayscale), I can further simplify the code by inlining the closures:


  def resize(originalImage: BufferedImage, size: Int): BufferedImage = {
    processImageMagick(originalImage, _.resize(size, size))
  }

  def grayscale(originalImage: BufferedImage): BufferedImage = {
    processImageMagick(originalImage, _.colorspace("gray"))
  }

...I could probably eliminate the need for closures altogether by allowing processImageMagick(...) to take any number of commands, but I'll save that for another day.

The next step is to move the method "processImageMagick(...)" to a utility object, but hopefully you get the idea on how to use higher-order functions to eliminate repetitive boilerplate code without introducing object patterns.

Thursday, January 3, 2013

Success Using Scala to Create a DSL

Update (March):  I've given up on the idea of creating DSL's for every business domain I work in.  I've concluded that Scala is not that scalable (at least not yet).  The original January post follows:

It took three rounds of making mistakes, but I finally have something elegant.

The primary goal was to represent a route between two airports using natural language.  For example, I wanted to be able to write the following line of code:

val routeFromDetroitToPhilly = "DTW" to "PHL"

I also wanted my model code to be pristine.  I wanted to separate any syntactic sugar from my model code, and to avoid any circular dependencies between model objects.  

Here are my model objects:

case class Airport(code: String)
case class Route(origin: Airport, destination: Airport)

A route links origin and destination Airports.  The Airport is only dependent on String.  This is good.  A problem was introduced, however, when I tried to use the "to" keyword to create a Route from two Airports.  You can see the issue if I re-write the desired code in a way that exposes some of the magic:


val routeFromDetroitToPhilly = Airport("DTW").to(Airport("PHL"))

My initial attempts to implement the "to" keyword involved introducing a circular dependency between Airport and Route:


case class Airport(code: String) {
  def to(destination: Airport) = Route(this, destination)
}
case class Route(origin: Airport, destination: Airport)

This worked, but it is nasty.  Route depends on Airport and Airport depends on Route.  My first attempt was even worse.  I tried adding the "to" method for route creation through inheritance.  Not a good idea.  I'm too ashamed to publish that failed attempt.

I finally figured out that I could put my syntactic sugar in a model helper class:

object ModelHelper {

  class AirportRouteBuilder(origin: Airport) {
    def to(destination: Airport): Route = {
      Route(origin, destination)
    }
  }
  
  implicit def stringToAirport(code: String) = Airport(code)
  
  implicit def stringToAirportRouteBuilder(airportCode: String) = new AirportRouteBuilder(Airport(airportCode))
  
  implicit def airportToAirportRouteBuilder(airport: Airport) = new AirportRouteBuilder(airport)
}

Now when I write this code:


val routeFromDetroitToPhilly = "DTW" to "PHL"

...Scala resolves the code as follows:
  • The function "to(...)" is invalid for the String "DTW", but we have an implicit method "stringToAirportRouteBuilder()" that creates an AirportRouteBuilder object that has a "to(...)" method
  • Call "stringToAirportRouteBuilder("DTW")" to create an AirportRouteBuilder object
  • Call "StringToAirport("PHL")" to create the Airport object needed for "AirportRouteBuilder.to(...)"
  • Call the "to()" method on AirportRouteBuilder to construct a new Route object that links the origin and destination airports
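As a sanity check, here is a small, hypothetical driver that exercises these conversions.  Nothing here is required by the DSL; it just shows the implicits kicking in:

object RouteDemo extends App {
  import ModelHelper._  // bring the implicit conversions into scope

  // "DTW" is converted to an AirportRouteBuilder, and "PHL" to an Airport,
  // so this line compiles and builds a Route.
  val routeFromDetroitToPhilly: Route = "DTW" to "PHL"

  println(routeFromDetroitToPhilly)  // prints Route(Airport(DTW),Airport(PHL))
}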
This is pretty mind-blowing stuff.  It seems too complex and magical, but I believe it's a way of thinking that we can get used to.  It's like eating an unfamiliar ethnic food for the first time.  After the initial shock, you get used to it if you allow yourself the opportunity to appreciate it.


Tuesday, October 30, 2012

Struggling with Scala as a DSL

Scala was created to be a Scalable Language.  I'm trying to figure out how well Scala has delivered on this promise.  To me, a language is scalable if it is useful for creating Domain-Specific programming languages (DSL's).  I've been going through examples and trying things on my own.  Here is a good example:  http://debasishg.blogspot.com/2008/05/designing-internal-dsls-in-scala.html