Author Archive: Phil Jackson

May 10, 2013

Understanding and Implementing Coding Standards

Coding standards provide a consistent framework for development within a project and across projects in an organization. A dozen programmers can complete a simple project in a dozen different ways by using unique coding methodologies and styles, so I like to think of coding standards as the "rules of the road" for developers.

When you're driving in a car, traffic is controlled by "standards" such as lanes, stoplights, yield signs and laws that set expectations around how you should drive. When you take a road trip to a different state, the stoplights might be hung horizontally instead of vertically or you'll see subtle variations in signage, but because you're familiar with the rules of the road, you're comfortable with the mechanics of driving in this new place. Coding standards help control development traffic and provide the consistency programmers need to work comfortably with a team across projects. The problem with allowing developers to apply their own unique coding styles to a project is the same as allowing drivers to drive as they wish ... Confusion about lane usage, safe passage through intersections and speed would result in collisions and bottlenecks.

Coding standards often seem restrictive or laborious when a development team starts considering their adoption, but they don't have to be ... They can be implemented methodically to improve the team's efficiency and consistency over time, and they can be as simple as establishing that all instantiations of an object must be referenced with a variable name that begins with a capital letter:

$User = new User();

While that example may seem overly simplistic, it actually makes life a lot easier for all of the developers on a given project. Regardless of who created that variable, every other developer can see the difference between a variable that holds data and one that instantiates an object. Think about the shapes of signs you encounter while driving ... You know what a stop sign looks like without reading the word "STOP" on it, so when you see a red octagon (in the United States, at least), you know what to do when you approach it in your car. Seeing a capitalized variable name would tell us about its function in the same way.

The example I gave of capitalizing instantiated objects is just an example. When it comes to coding standards, the most effective rules your team can incorporate are the ones that make the most sense to you. While there are a few best practices in terms of formatting and commenting in code, the most important characteristics of coding standards for a given team are consistency and clarity.

So how do you go about creating a coding standard? Most developers dislike doing unnecessary work, so the easiest way to create a coding standard is to use an already-existing one. Take a look at any libraries or frameworks you are using in your current project. Do they use any coding standards? Are those coding standards something you can live with or use as a starting point? You are free to make any changes you wish in order to best facilitate your team's needs, and you can even set how strictly specific coding standards must be adhered to. Take, for example, left-hand comparisons:

if ( $a == 12 ) {} // right-hand comparison
if ( 12 == $a ) {} // left-hand comparison

Both of these statements are valid but one may be preferred over the other. Consider the following statements:

if ( $a = 12 ) {} // supposed to be a right-hand comparison but is now an assignment
if ( 12 = $a ) {} // supposed to be a left-hand comparison but is now an assignment

The first statement will now evaluate to true because $a is assigned the value 12, which causes the code within the if-statement to execute (not the desired result). The second statement will cause an error, making it obvious that a mistake in the code has occurred. Because our team couldn't come to a consensus, we decided to allow both of these standards ... Either of these two formats is acceptable, and both will pass code review, but they are the only two acceptable variants. Code that deviates from those two formats would fail code review and would not be allowed in the code base.

Coding standards play an important role in efficient development of a project when you have several programmers working on the same code. By adopting coding standards and following them, you'll avoid a free-for-all in your code base, and you'll be able to look at every line of code and know more about what that line is telling you than what the literal code is telling you ... just like seeing a red octagon posted on the side of the road at an intersection.

-@SoftLayerDevs

March 22, 2013

Social Media for Brands: Monitor Twitter Search via Email

If you're responsible for monitoring Twitter for conversations about your brand, you're faced with a challenge: You need to know what people are saying about your brand at all times AND you don't want to live your entire life in front of Twitter Search.

Over the years, a number of social media applications have been released specifically for brand managers and social media teams, but most of those applications (especially the free/inexpensive ones) differentiate themselves only by the quality of their analytics and how real-time their data is reported. If that's what you need, you have plenty of fantastic options. Those differentiators don't really help you if you want to take a more passive role in monitoring Twitter search ... You still have to log into the application to see your fancy dashboards with all of the information. Why can't the data come to you?

About three weeks ago, Hazzy stopped by my desk and asked if I'd help build a tool that uses the Twitter Search API to collect brand keyword mentions and send an email alert with those mentions in digest form every 30 minutes. The social media team had been using Twilert for these types of alerts since February 2012, but over the last few months, messages have been delayed due to issues connecting to Twitter search ... It seems that the service is so popular that it hits Twitter's limits on API calls. An email digest scheduled to be sent every thirty minutes ends up going out ten hours late, and ten hours is an eternity in social media time. We needed something a little more timely and reliable, so I got to work on a simple "Twitter Monitor" script to find all mentions of our keyword(s) on Twitter, email those results in a simple digest format, and repeat the process every 30 minutes when new mentions are found.

With Bear's Python-Twitter library on GitHub, connecting to the Twitter API is a breeze. Why did we use Bear's library in particular? Just look at his profile picture. Yeah ... 'nuff said. So with that Python wrapper to the Twitter API in place, I just had to figure out how to use the tools Twitter provided to get the job done. For the most part, the process was very clear, and Twitter actually made querying the search service much easier than we expected. The Search API finds all mentions of whatever string of characters you designate, so instead of creating an elaborate Boolean search for "SoftLayer OR #SoftLayer OR @SoftLayer ..." or any number of combinations of arbitrary strings, we could simply search for "SoftLayer" and have all of those results included. If you want to see only @ replies or hashtags, you can limit your search to those alone, but because "SoftLayer" isn't a word that gets thrown around much without referencing us, we wanted to see every instance. This is the code we ended up working with for the search functionality:

def status_by_search(search):
    # Pull the latest search results and keep only Tweets newer than
    # the highest Tweet ID we've already processed
    statuses = api.GetSearch(term=search)
    results = [status for status in statuses if status.id > get_log_value()]
    returns = []
    if len(results) > 0:
        for result in results:
            returns.append(format_status(result))

        new_tweets(results)  # record the newest Tweet ID for the next run
        return returns, len(returns)
    else:
        exit()  # no new Tweets, so stop before an email goes out

If you walk through the script, you'll notice that we want to return only unseen Tweets to our email recipients. Shortly after we got the Twitter Monitor up and running, we noticed how easy it would be to get spammed with the same messages every time the script ran, so we had to filter our results accordingly. Twitter's API allows you to request Tweets with a Tweet ID greater than one that you specify; however, when I tried designating that "oldest" Tweet ID, we had mixed results ... Whether due to my ignorance or a fault in the implementation, we were getting fewer results than we should. Tweet IDs are unique and numerically sequential, so they can be relied upon as much as a datetime (and they're far easier to work with), so I decided to use the highest Tweet ID from each batch of processed messages to filter the next set of results. The script stores that Tweet ID and uses a little bit of logic to determine which Tweets are newer than the last Tweet reported.

def new_tweets(results):
    # Record the new high-water mark if this batch contains a Tweet
    # newer than the last one we logged
    if get_log_value() < max(result.id for result in results):
        set_log_value(max(result.id for result in results))
        return True


def get_log_value():
    # Read the highest Tweet ID processed so far
    with open('tweet.id', 'r') as f:
        return int(f.read())


def set_log_value(messageId):
    # Store the highest Tweet ID for the next run
    with open('tweet.id', 'w+') as f:
        f.write(str(messageId))

Once we culled out our new Tweets, we needed our script to email those results to our social media team. Luckily, we didn't have to reinvent the wheel here, and we added a few lines that enabled us to send an HTML-formatted email over any SMTP server. One downside of the script is that login credentials for your SMTP server are stored in plaintext, so if you can come up with an alternative that adds a layer of security to those credentials (or lets you send with different kinds of credentials), we'd love for you to share it.
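The script's actual mailing code isn't reproduced here, but a minimal sketch of building and sending an HTML digest with Python's standard smtplib and email modules looks something like this (all function and parameter names are illustrative, not the script's real API):

```python
import smtplib
from email.mime.text import MIMEText

def build_digest_email(tweets, sender, recipient):
    # Join the formatted tweets into a single HTML body
    msg = MIMEText("<br>".join(tweets), "html")
    msg["Subject"] = "Twitter Monitor: %d new mentions" % len(tweets)
    msg["From"] = sender
    msg["To"] = recipient
    return msg

def send_digest(msg, host, port, user, password):
    # Credentials reach this function in plaintext -- the same caveat
    # the post mentions about storing them in the script
    server = smtplib.SMTP(host, port)
    server.starttls()              # drop this line for a relay without TLS
    server.login(user, password)   # drop this line for an open relay
    server.sendmail(msg["From"], [msg["To"]], msg.as_string())
    server.quit()
```

Splitting message construction from delivery also makes it easy to swap the SMTP step for something else later without touching the digest formatting.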

From that point, we could run the script manually from the server (or a laptop for that matter), and an email digest would be sent with new Tweets. Because we wanted to automate that process, I added a cron job that would run the script at the desired interval. As a bonus, if the script doesn't find any new Tweets since the last time it was run, it doesn't send an email, so you won't get spammed by "0 Results" messages overnight.

The script has been in action for a couple of weeks now, and it has gotten our social media team's seal of approval. We've added a few features here and there (like adding the number of Tweets in an email to the email's subject line), and I've enlisted the help of Kevin Landreth to clean up the code a little. Now, we're ready to share the SoftLayer Twitter Monitor script with the world via GitHub!

SoftLayer Twitter Monitor on GitHub

The script should work well right out of the box in any Python environment with the required libraries after a few simple configuration changes:

  • Get your Twitter Customer Secret, Access Token and Access Secret from https://dev.twitter.com/
  • Copy/paste that information where noted in the script.
  • Update your search term(s).
  • Enter your mailserver address and port.
  • Enter your email account credentials if you aren't working with an open relay.
  • Set the self.from_ and self.to values to your preference.
  • Ensure all of the Python requirements are met.
  • Configure a cron job to run the script at your desired interval. For example, if you want to send emails every 10 minutes: */10 * * * * <path to python> <path to script> > /dev/null 2>&1

As soon as you add your information, you should be in business. You'll have an in-house Twitter Monitor that delivers a simple email digest of your new Twitter mentions at whatever interval you specify!

Like any good open source project, we want the community's feedback on how it can be improved or other features we could incorporate. This script uses the Search API, but we're also starting to play around with the Stream API and SoftLayer Message Queue to make some even cooler tools to automate brand monitoring on Twitter.

If you end up using the script and liking it, send SoftLayer a shout-out via Twitter and share it with your friends!

-@SoftLayerDevs

September 12, 2012

How Can I Use SoftLayer Message Queue?

One of the biggest challenges developers run into when coding large, scalable systems is automating batch processes and distributing workloads to optimize compute resource usage. More simply, intra-application and inter-system communications tend to become a bottleneck that affect the user experience, and there is no easy way to get around it. Well ... There *was* no easy way around it.

Meet SoftLayer Message Queue.

As the name would suggest, Message Queue allows you to create one or more "queues" or containers which contain "messages" — strings of text that you can assign attributes to. The queues pass along messages in first-in-first-out order, and in doing so, they allow for parallel processing of high-volume workflows.

That all sounds pretty complex and "out there," but you might be surprised to learn that you're probably using a form of message queuing right now. Message queuing allows for discrete threads or applications to share information with one another without needing to be directly integrated or even operating concurrently. That functionality is at the heart of many of the most common operating systems and applications on the market.
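The core idea is easy to see with the standard library's Queue (a stand-in for illustration only, not the actual SoftLayer Message Queue API): the producer and consumer share nothing but the queue itself, and messages come out in the order they went in.

```python
from queue import Queue

# The queue is the only thing the two sides have in common
inbox = Queue()

def producer():
    # Messages are plain strings, passed along first-in-first-out
    inbox.put("render frame 1")
    inbox.put("render frame 2")

def consumer():
    # The consumer never talks to the producer directly; it just
    # drains whatever messages are waiting
    done = []
    while not inbox.empty():
        done.append(inbox.get())  # oldest message comes out first
    return done

producer()
digest = consumer()
print(digest)  # → ['render frame 1', 'render frame 2']
```

Because neither side knows about the other, you can replace, scale or pause either one without touching its counterpart.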

What does it mean in a cloud computing context? Well, Message Queue facilitates more efficient interaction between different pieces of your application or independent software systems. The easiest way to demonstrate how that happens is by sharing a quick example:

Creating a Video-Sharing Site

Let's say we have a mobile application providing the ability to upload video content to your website: sharevideoswith.phil. The problem we have is that our webserver and CMS can only share videos in a specific format from a specific location on a CDN. Transcoding the videos on the mobile device before it uploads proves to be far too taxing, what with all of the games left to complete from the last Humble Bundle release. Having the videos transcoded on our webserver would require a lot of time/funds/patience/knowledge, and we don't want to add infrastructure to our deployment for transcoding app servers, so we're faced with a conundrum. A conundrum that's pretty easily answered with Message Queue and SoftLayer's (free) video transcoding service.

What We Need

  • Our Video Site
  • The SoftLayer API Transcoding Service
  • SoftLayer Object Storage
    • A "New Videos" Container
    • A "Transcoded Videos" Container with CDN Enabled
  • SoftLayer Message Queue
    • "New Videos" Queue
    • "Transcoding Jobs" Queue

The Process

  1. Your user uploads the video to sharevideoswith.phil. Your web app creates a page for the video and populates the content with a "processing" message.
  2. The web application saves the video file into the "New Videos" container on object storage.
  3. When the video is saved into that container, it creates a new message in the "New Videos" message queue with the video file name as the body.
  4. From here, we have two worker functions. These workers work independently of each other and can be run at any comfortable interval via cron or any scheduling agent:
Worker One: Looks for messages in the "New Videos" message queue. If a message is found, Worker One transfers the video file to the SoftLayer Transcoding Service, starts the transcoding process and creates a message in the "Transcoding Jobs" message queue with the Job ID of the newly created transcoding job. Worker One then deletes the originating message from the "New Videos" message queue to prevent the process from happening again the next time Worker One runs.

Worker Two: Looks for messages in the "Transcoding Jobs" queue. If a message is found, Worker Two checks whether the transcoding job is complete. If not, it does nothing with the message, and that message is placed back into the queue for the next Worker Two run to pick up and check. When Worker Two finds a completed job, the newly-transcoded video is pushed to the "Transcoded Videos" container on object storage, and Worker Two updates the page our web app created for the video to display an embedded media player using the CDN location for our transcoded video on object storage.
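Worker One's pass can be sketched in a few lines. Everything here is a hypothetical stand-in (the in-memory queue and transcoder classes are fakes for illustration), not the real Message Queue or transcoding service clients:

```python
class FakeQueue:
    # Minimal in-memory stand-in for a message queue (not the real API)
    def __init__(self):
        self.messages = []
    def push(self, body):
        self.messages.append(body)
    def pop(self):
        # Peek at the oldest message without removing it yet
        return self.messages[0] if self.messages else None
    def delete(self, body):
        self.messages.remove(body)

class FakeTranscoder:
    # Pretend transcoding service; hands back a job id per file
    def __init__(self):
        self.jobs = {}
    def start(self, filename):
        job_id = len(self.jobs) + 1
        self.jobs[job_id] = filename
        return job_id

def worker_one(new_videos, jobs, transcoder):
    # One cron-driven pass: move a video from "New Videos" into a
    # transcoding job and record the job id for Worker Two
    filename = new_videos.pop()
    if filename is None:
        return  # nothing to do this run
    job_id = transcoder.start(filename)
    jobs.push(job_id)                 # Worker Two will poll this job
    new_videos.delete(filename)       # don't process this video twice
```

Worker Two would follow the same shape: pop a job id, check completion, and either requeue the message or publish the finished video.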

Each step in the process is handled by an independent component. This allows us to scale or substitute each piece as necessary without needing to refactor the other portions. As long as each piece receives and sends the expected message, its colleague components will keep doing their jobs.

Video transcoding is a simple use-case that shows some of the capabilities of Message Queue. If you check out the Message Queue page on our website, you can see a few other examples — from online banking to real-time stock, score and weather services.

Message Queue leverages Cloudant as the highly scalable low latency data layer for storing and distributing messages, and SoftLayer customers get their first 100,000 messages free every month (with additional messages priced at $0.01 for every 10,000).

What are you waiting for? Go get started with Message Queue!

-Phil (@SoftLayerDevs)

September 10, 2012

Creating a Usable, Memorable and Secure Password

When I was young, I vividly remember a wise man sharing a proverb with me: "Locks are for honest people." The memory is so vivid because it completely confused me ... "If everyone was honest, there would be no need for locks," I thought, naively. As it turns out, everyone isn't honest, and if "locks keep honest people honest," they don't do anything to/for dishonest people. That paradox lingered in the back of my mind, and a few years later, I found myself using some sideways logic to justify learning the mechanics of lock picking.

I ordered my first set of lock picks (with instruction booklet) for around $10 online. When the package arrived, I scrambled to unwrap it like Ralphie unwrapped the "Red Ryder" BB gun in "A Christmas Story," and I set out to find my first lock to pick. After a few unsuccessful attempts, I turned to the previously discarded instruction booklet, and I sat down to actually learn what I was supposed to be doing. That bit of study wound up being useful; with that knowledge, I managed to pick my first lock.

I tend to collect hobbies. I also tend to shift every spare thought towards my newest obsession until whatever goal I set is accomplished. To this end, I put together a mobile lock-picking training device — the cylinder/tumbler from a dead bolt, my torque wrench wrapped with electrical tape to prevent the recurrence of blisters, and my favorite snake rake. I took this device with me everywhere, unconsciously unlocking and resetting the lock as I went about my shopping, sat in a doctor's office or walked around the block. In my mind, I was honing my skills on a mechanical challenge, but as one of my friends let me know, people who saw me playing with the lock in public would stare at me like I was a budding burglar audaciously flaunting his trade.

I spent less money on a lock picking set than I would have on a lock, and I felt like I had a key to open any door. The only thing between me and the other side of a locked door in front of me was my honesty. What about the dishonest people in the world, though? They have the same access to cheap tools, and while they probably don't practice their burgling in public, they can spend just as much time sharpening their skills in private. From then on, I was much more aware of the kinds of locks I bought and used to secure my valuables.

When I started getting involved in technology, I immediately noticed the similarities between physical security and digital security. When I was growing up, NBC public service announcements taught me, "Knowledge is Power," and that's even truer now than it was then. We trust technology with our information, and if someone else gets access to that information, the results can be catastrophic.

Online, the most common "hacks" and security exploits are usually easily avoidable. They're the IRL equivalent of leaving valuables on a table by an unlocked window with the thought, "The window is closed ... My stuff is secure." Some of those windows may be hard to reach, but some of them are street-level in high-traffic pedestrian areas. The most vulnerable and visible of access points: Passwords.

You've heard people tell you not to do silly things like making "1 2 3 4 5" your combination lock, and your IT team has probably gotten onto you about using "password" to log onto your company's domain, but our tendency to create simpler passwords is a response to the inherent problem that a secure password is, by its nature, hard to remember. The average Internet user probably isn't going to use pwgen or a password lockbox ... If you had a list of passwords from a given site, my guess is that you'd wind up seeing a lot more pets' names and birth years than passwords like S0L@Y#Rpr!Vcl0udN)3mblyR#Q. What people need to understand is that the "secure" password can be just as easy to remember as "Fluffy1982."

Making a *Usable* Secure Password

The process of creating a unique, usable and secure password is pretty straightforward:

  1. Start with a series of words or phrases which have a meaning to you: A quote in a movie, song lyric, title of your favorite book series, etc. For our example, let's use "SoftLayer Private Clouds, no assembly required."
  2. l33t up your phrase. To do this, you'd remove punctuation and spaces, and you'd replace letters in the phrase with special characters. Predetermine these conversions to create a template of alterations you can apply to any string with minimal thought. In the simplest of cyphers, letters become numbers or characters that resemble the letter: An "o" becomes a "0," an "e" becomes a "3," an "a" becomes an "@," etc. In more complicated structures, a character can be different based on where it lies in the string or what less-common substitutions you choose to use. Our example at this point would look like this: "S0ftL@y3rPr1v@t3Cl0udsn0@ss3mblyr3qu1r3d"
  3. Right now, we have a password that would make any brute-forcing script-kiddie yearn for the Schwartz, but we're not done yet. If someone were to find our cypher and personal phrase, they may be able to figure out our password. Also, this password is too long for use on many sites with password restrictions that cap you at 16 characters. Our goal is to create a password between 15-25 characters and be prepared to make cuts when necessary.
  4. A good practice is to cut out the beginning or ending of a word. In our example (taking out the l33t substitutions for simplicity here), our phrase might look like this: "so-layer-priv-cloud-no-embly-req"
  5. When we combine the shortened password with l33t substitutions, the last trick we want to incorporate is using our Shift key. An "e" might be a "3" in a simple l33t cypher, but if we use the Shift key, the "e" becomes a "#" (Shift+"3"): "S0L@Y#Rpr!Vcl0udN)#mblyR#Q"
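The substitution steps can even be scripted. Here's a quick sketch of the simple cypher from step 2 (the substitution table is just the example set from this post; a real one should use your own less-common conversions):

```python
def leetify(phrase, shift=False):
    # Simple example cypher: letters become look-alike digits, or the
    # Shift-key versions of those digits when shift=True
    plain = {"o": "0", "e": "3", "a": "@", "i": "1"}
    shifted = {"o": ")", "e": "#", "a": "@", "i": "!"}  # Shift + digit
    table = shifted if shift else plain
    out = []
    for ch in phrase:
        if ch in " .,'\"":  # remove punctuation and spaces (step 2)
            continue
        out.append(table.get(ch.lower(), ch))
    return "".join(out)

print(leetify("no assembly required"))  # → n0@ss3mblyr3qu1r3d
```

A helper like this is only for checking your scheme; the whole point of the technique is that the template lives in your head, not in a file.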

The main idea is that when you're "locking" your accounts with a password, you don't need the most complicated lock ever created ... You just need one that can't be picked easily. Establish a pattern of uncommon substitutions that you can use consistently across all of your sites, and you'll be able to use seemingly common phrases like "Fluffy is my dog's name" or "Neil Armstrong was an astronaut" without worrying about anyone being able to "open your window."

-Phil (@SoftLayerDevs)

August 21, 2012

High Performance Computing - GPU v. CPU

Sometimes, technical conversations can sound like people are just making up tech-sounding words and acronyms: "If you want HPC to handle Gigaflops of computational operations, you probably need to supplement your server's CPU and RAM with a GPU or two." It's like hearing a shady auto mechanic talk about replacing gaskets on double overhead flange valves or hearing Chris Farley (in Tommy Boy) explain that he was "just checking the specs on the endline for the rotary girder" ... You don't know exactly what they're talking about, but you're pretty sure they're lying.

When we talk about high performance computing (HPC), a natural tendency is to go straight into technical specifications and acronyms, but that makes the learning curve steeper for people who are trying to understand why a solution is better suited for certain types of workloads than technology they are already familiar with. With that in mind, I thought I'd share a quick explanation of graphics processing units (GPUs) in the context of central processing units (CPUs).

The first thing that usually confuses people about GPUs is the name: "Why do I need a graphics processing unit on a server? I don't need to render the visual textures from Crysis on my database server ... A GPU is not going to benefit me." It's true that you don't need cutting-edge graphics on your server, but a GPU's power isn't limited to "graphics" operations. The "graphics" part of the name reflects the original intention for the kind of processing GPUs perform, but in the last ten years or so, developers and engineers have adapted that processing power for more general-purpose computing.

GPUs were designed with a highly parallel structure that allows large blocks of data to be processed at one time — similar computations are made on data simultaneously (rather than in order). If you assigned the task of rendering a 3D environment to a CPU, it would slow to a crawl because a CPU handles requests more linearly. Because GPUs are better at performing repetitive tasks on large blocks of data than CPUs, you start to see the benefit of enlisting a GPU in a server environment.
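As a loose analogy (plain Python on a CPU, no GPU involved), here's the same repetitive per-element task expressed both ways — walked through one value at a time, like a CPU core working a loop, and fanned out across many workers at once, the way a GPU schedules thousands of identical threads in hardware:

```python
from concurrent.futures import ThreadPoolExecutor

def shade(pixel):
    # A repetitive per-element task, like shading one pixel of a frame
    return pixel * pixel

frame = list(range(8))

# "CPU style": one worker walks the data in order
serial = [shade(p) for p in frame]

# "GPU style" (loose analogy): the same tiny task handed to many
# workers at once; a real GPU runs thousands of such threads in hardware
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(shade, frame))

assert serial == parallel  # identical results, different execution model
```

The results are identical either way; the win is that the parallel model finishes large blocks of identical work in a fraction of the time.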

The Folding@home project and bitcoin mining are two of the most visible distributed computing projects that GPUs are accelerating, and they're perfect examples of workloads made exponentially faster with the parallel processing power of graphics processing units. You don't need to be folding proteins or mining bitcoin to get the performance benefits, though; if you are taxing your CPUs with repetitive compute tasks, a GPU could make your life a lot easier.

If that still doesn't make sense, I'll turn the floor over to the Mythbusters in a presentation for our friends at NVIDIA:

SoftLayer uses NVIDIA Tesla GPUs in our high performance computing servers, so developers can use "Compute Unified Device Architecture" (CUDA) to easily take advantage of their GPU's capabilities.

Hopefully, this quick rundown is helpful in demystifying the "technobabble" about GPUs and HPC ... As a quick test, see if this sentence makes more sense now than it did when you started this blog: "If you want HPC to handle Gigaflops of computational operations, you probably need to supplement your server's CPU and RAM with a GPU or two."

-Phil

May 10, 2012

The SoftLayer API and its 'Star Wars' Sibling

When I present about the SoftLayer API at conferences and meetups, I often use an image that shows how many of the different services in the API are interrelated and connected. As I started building the visual piece of my presentation, I noticed a curious "coincidence" about the layout of the visualization:

SoftLayer API Visualization

What does that look like to you?

You might need to squint your eyes and tilt your head or "look beyond the image" like it's one of those "Magic Eye" pictures, but if you're a geek like me, you can't help but notice a striking resemblance to one of the most iconic images from Star Wars:

SoftLayer API == Death Star?

The SoftLayer API looks like the Death Star.

The similarity is undeniable ... The question is whether that resemblance is coincidental or whether we can extrapolate some kind of fuller meaning in light of the visible similarities. I can hear KHazzy now ... "Phil, while that's worth a chuckle and all, there is no way you can actually draw a relevant parallel between the SoftLayer API and The Death Star." While Alderaan may be far too remote for an effective demonstration, this task is no match for the power of the Phil-side.

Challenge Accepted.

The Death Star: A large space station constructed by the Galactic Empire equipped with a super-laser capable of destroying an entire planet.

The SoftLayer API: A robust set of services and methods which provide programmatic access to all portions of the SoftLayer Platform capable of automating any task: administrative, configuration or otherwise.

Each is the incredible result of innovation and design. The construction of the Death Star and creation of the SoftLayer API took years of hard work and a significant investment. Both are massive in scale, and they're both effective and ruthless when completing their objectives.

The most important distinction: The Death Star was made to destroy while the SoftLayer API was made to create ... The Death Star was designed to subjugate a resistance force and destroy anything in the empire's way. The SoftLayer API was designed to help customers create a unified, automated way of managing infrastructure; though in the process, admittedly that "creation" often involves subjugating redundant, compulsory tasks.

The Death Star and the SoftLayer API can both seem pretty daunting. It can be hard to find exactly what you need to solve all of your problems ... Whether that be an exhaust port or your first API call. Fear not, for I will be with you during your journey, and unlike Obi-Wan Kenobi, I'm not your only hope. There is no need for rebel spies to acquire the schematics for the API ... We publish them openly at sldn.softlayer.com, and we encourage our customers to break the API down into the pieces of functionality they need.

-Phil (@SoftLayerDevs)

August 16, 2011

SLDN 2.0 - The Development Network Evolved

SoftLayer is in a constant state of change ... It's not that bad change we all fear; it's the type of change that allows you to stretch the boundaries of your normal experience and run like a penguin ... Because I got some strange looks when coworkers read "run like a penguin," I should explain that I recently visited Moody Gardens in Galveston and saw penguins get crazy excited when they were about to get fed, so that's the best visual I could come up with. Since I enjoy a challenge (and enjoy running around like a penguin), when I was asked to design the new version of SLDN, I was excited.

The goal was simple: Take our already amazing documentation software infrastructure and make it better. A large part of this was to collapse our multi-site approach into a single, unified user experience. Somewhere along the way, "When is the proposal going to be ready?" became "When is the site going to be ready?" At that point, I realized that all of the hurdles I had been trampling over in my cerebral site-building were still there, standing, waiting for me on my second lap.

I recently had the honor to present our ideas, philosophy and share some insight into the technical details of the site at OSCON 2011, and KHazzy had the forethought to record it for all of you!

It's a difficult balance to provide details and not bore the audience with tech specs, so I tried to keep the presentation relatively light to encourage attendees (and now viewers) to ask questions about areas they want a little more information about. If you're looking at a similar project in the future, feel free to bounce ideas off me, and I'll steer you clear of a few land mines I happened upon.

-Phil

April 7, 2011

Thou Shalt Transcode

Deep in the depths of an ancient tomb of the great Abswalli, you and your team accidentally awaken the Terbshianaki ghost army. You’re disconnected from the supply caravan with the valuable resources that could not only sustain your journey but also save your team. As Zeliagh the Protesiann hunter fires his last arrow, you come to the sudden realization that continuing your quest is now hopeless. Alas, true terror was unknown before this moment as you come to the most surprising realization: The one thing you truly can't live without is your trusty server that converts one type of media into another.

Fear not, great adventurer, for I, Phil of the SLAPI, have come, and I bear the gifts of empowerment, automation and integration. Freedom from the horror of your epiphany can be found in our complimentary media transcoding service.
Before we can begin, some preparation is required. First, you must venture to our customer portal and create a transcoding user: Private Network->Transcoding. As you know from the use of your other SoftLayer spoils, you shan't be obligated to access this functionality from your web browser. You can summon the API wizardry bequeathed to you by coders of old in the SLDN scroll: SoftLayer_Network_Media_Transcode_Account::createTranscodeAccount.*

*For the sake of this blog, we'll abbreviate "SoftLayer_Network_Media_Transcode_Account" as "SNMTA" from here forward ... Shortening it helps with blog formatting.

You must then construct an object to represent a SoftLayer Network Media Transcode Job: a SoftLayer_Network_Media_Transcode_Job template object. This template object can be built with a number of properties, but your pursuit of relief from your aforementioned horror necessitates only the required ones.

You will need to decide which format the final treasure will take. You may find the available options with the SNMTA::getPresets method.

$client = SoftLayer_SoapClient::getClient('SoftLayer_Network_Media_Transcode_Account', $transcodeAccountId, $apiUsername, $apiKey);
$transcodePresets = $client->getPresets();
print_r($transcodePresets);
Array
(
    [0] => stdClass Object
        (
            [GUID] => {9C3716B9-C931-4873-9FD1-03A17B0D3350}
            [category] => Application Specific
            [description] => MPEG2, Roku playback, 1920 x 1080, Interlaced, 29.97fps, 25Mbps, used with Component/VGA connection.
            [name] => MPEG2 - Roku - 1080i
        )
 
    [1] => stdClass Object
        (
            [GUID] => {03E81152-2A74-4FF3-BAD9-D1FF29973032}
            [category] => Application Specific
            [description] => MPEG2, Roku playback, 720 x 480, 29.97fps, 6.9Mbps, used with Component/S-Video connection.
            [name] => MPEG2 - Roku - 480i
        )
 
    [2] => stdClass Object
        (
            [GUID] => {75A264DB-7FBD-4976-A422-14FBB7950BD1}
            [category] => Application Specific
            [description] => MPEG2, Roku playback, 720 x 480, Progressive, 29.97fps, 6.9Mbps, used with Component/VGA connection.
            [name] => MPEG2 - Roku - 480p
        )
.....

The freedom to use this power (the more you know!) is yours. In this instance, I scrolled through and let my intuition find the option that just felt right:

stdClass Object
(
    [GUID] => {21A33980-5D78-4010-B4EB-6EF15F5CD69F}
    [category] => Web\Flash
    [description] =>
    [name] => FLV 1296kbps 640x480 4x3 29.97fps
)

To decipher this language we must know the following:

  1. The GUID is the unique identifier we will use to reference our champion.
  2. The category groups like presets together; this will be useful for those whose journey leads down the path of GUI creation.
  3. A description of the preset, if one is available, will be listed under description.
  4. name is simply a human-readable name for our preset.
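As a small sketch of point 2, the presets returned by getPresets() can be bucketed by category, say, to populate a dropdown in a GUI. Here a couple of entries from the getPresets() output above stand in for the full list, so the sketch runs on its own:

```php
<?php
// Stand-ins for two presets from the getPresets() output above
$preset1 = new stdClass();
$preset1->GUID = '{9C3716B9-C931-4873-9FD1-03A17B0D3350}';
$preset1->category = 'Application Specific';
$preset1->name = 'MPEG2 - Roku - 1080i';

$preset2 = new stdClass();
$preset2->GUID = '{21A33980-5D78-4010-B4EB-6EF15F5CD69F}';
$preset2->category = 'Web\Flash';
$preset2->name = 'FLV 1296kbps 640x480 4x3 29.97fps';

$transcodePresets = array($preset1, $preset2);

// Group preset names by their category
$presetsByCategory = array();
foreach ($transcodePresets as $preset) {
    $presetsByCategory[$preset->category][] = $preset->name;
}
print_r($presetsByCategory);
```

In a real script you would feed the array from your live getPresets() call into the same loop.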

You are nearly ready to restore your yearned-for transcoding service as the ghostly horde presses the defensive perimeter. We have but one more task of preparation: We must provide the transcoding service a file! Wielding your Wand of File Transference +3 (or your favorite FTP client), enter the details for your transcode FTP account found on the Transcoding page of the customer portal (or, of course, via SNMTA::getFtpAttributes) and choose the "in" directory as the destination for your source file. Lacking any other option, you may call upon Sheshura, a fairy sprite specializing in arcane documents, for a source video file: Epic Battle

The battle rages around you as the Wahwatarian mercenaries protect your flank. The clicking of your laptop keys twists and merges in the air around your ears, only to transcend into a booming chorus of "Ride of the Valkyries" as you near transcoding Utopia. You strike:

<?php
//  Create a transcoding client
$client = SoftLayer_SoapClient::getClient('SoftLayer_Network_Media_Transcode_Job', null, $apiUsername, $apiKey);
 
// Define our preset GUID and filename
$presetGUID = '{21A33980-5D78-4010-B4EB-6EF15F5CD69F}'; // the FLV preset chosen above
$inputFile = 'video.mov';
 
/*
 * The transcoding service will append the new file extension to the output file
 * so we strip the extension here.
 */
$outputFile = substr($inputFile, 0, strrpos($inputFile, '.'));
 
try {
    // Create a SoftLayer_Network_Media_Transcode_Job template object with the required properties
    $transcodeJob = new stdClass();
    $transcodeJob->transcodePresetGuid = $presetGUID;
    $transcodeJob->inputFile = "/in/$inputFile";
    $transcodeJob->outputFile = "/out/$outputFile";
 
    // Call createObject() with our template object as a parameter
    $result = $client->createObject($transcodeJob);
    // $result will contain a SoftLayer_Network_Media_Transcode_Job object
    print_r($result);
} catch ( Exception $e) {
    die( $e->getMessage());
}

If your will did not waver nor did your focus break in the face of ever-closing ghouls pounding your resolve, your treasure will be waiting. Brandish your Wand of File Transference +3, or utilize your favorite FTP client to retrieve your reward: "out/video.flv"

If the gods be with thee, your resulting file should look like this: Epic Battle (in .flv)

With your victory fresh upon the tablets of history, you can now encode to any of our supported formats. Try using the process above to convert the video to .mp4 format so your resulting file output is Epic Battle (in .mp4)!

-Phil

P.S. If you're going to take off your training wheels, the second example uses "[description] => MPEG4 file, 320x240, 15fps, 256kbps for download" for the bandwidth impaired.

January 12, 2011

'What\'s with These "Quote" Things?'

'We\'ve' . "all $een" . 'this' . $problem . 'before' . $and->it . ((1==1) ? 'seems' : 'doesn\'t seem') . sprintf('about time to %s things', 'clarify');

PHP string handling can be a tough concept to wrangle. Developers have many options: single / double quotes, concatenation and various string manipulation functions. The choices you make have a significant impact on the readability and performance of your script. Let's meet the line-up:

The Literal
Single quotes are used to define a string whose contents should be taken literally. What this means is that PHP will not attempt to expand any content contained between the ' '.

This is the way to tell your favorite Hypertext Preprocessor, "That little guy? Don't worry about that little guy."

In most cases this is the de facto standard for strings. However, when a decent number of variables become involved, it tends to become difficult to keep your quotes accounted for. When combining strings with variables inside single quotes, the "." operator is needed between each variable/string; that "." is known as the concatenation operator.

Input:
$date = 'Yesterday';
$location = 'outside';
$item = array ( 'description' => 'lovely', 'name' => 'butterfly');
$content = $date . ' I went ' . $location . ' and caught a ' . $item['description'] . ' ' . $item['name'];

Output: Yesterday I went outside and caught a lovely butterfly

The Interpreted
Using double quotes will cause PHP to look a little closer into the string to find anywhere it can "read between the lines." Variables and escape characters will be expanded, so you can reference them inline without the need for concatenation. This can be useful when creating strings which include pre-defined variables.

Input:
$file = 'example.jpg';
$content = "<a href=\"http://www.example.com/$file\">$file</a>";

Output: <a href="http://www.example.com/example.jpg">example.jpg</a>
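Escape sequences follow the same rule as variables: double quotes expand them, while single quotes keep them as literal characters (aside from \' and \\). A quick contrast:

```php
<?php
// Double quotes expand \n into a real newline character...
$interpreted = "line one\nline two";
// ...while single quotes keep the backslash and the 'n' as-is
$literal = 'line one\nline two';

echo $interpreted . "\n"; // prints two lines
echo $literal . "\n";     // prints: line one\nline two
```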

In previous versions of PHP there was a significant performance difference between the use of single vs. double quotes. In later versions the variation is negligible, so the decision of one over the other should focus on feature and readability concerns.
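If you want to see for yourself, here is a quick, unscientific micro-benchmark sketch that builds the same string many times with concatenation and then with interpolation. Exact timings will vary by machine and PHP version; the point is that both produce identical output in comparable time:

```php
<?php
$name = 'butterfly';
$iterations = 100000;

// Single quotes + concatenation
$start = microtime(true);
for ($i = 0; $i < $iterations; $i++) {
    $a = 'I caught a ' . $name;
}
$concatTime = microtime(true) - $start;

// Double quotes + interpolation
$start = microtime(true);
for ($i = 0; $i < $iterations; $i++) {
    $b = "I caught a $name";
}
$interpTime = microtime(true) - $start;

printf("concatenation: %.4fs, interpolation: %.4fs\n", $concatTime, $interpTime);
```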

The Thoughtful
Unlike single and double quotes, the sprintf function comes to the table with a few cards up its sleeve. When provided with a formatting "template" and arguments, sprintf will return a formatted string.

Input:
$order = array ( 'item' => 'RC Helicopters', 'status' => 'pending');
$content = sprintf('Your order of %s is currently %s', $order['item'], $order['status']);

Output: Your order of RC Helicopters is currently pending

When constructing a complex string such as an XML document, sprintf allows the developer to view the string with placeholders rather than a mish-mash of escaped quotes and variables. In addition, sprintf can specify the type of a variable, change padding/text alignment and even change the order in which its arguments are displayed.
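For instance, positional specifiers reorder and reuse arguments, and padding flags control a number's width. A small sketch (note the single quotes, which keep PHP from trying to interpolate the %1$s placeholders as variables):

```php
<?php
$item = 'RC Helicopters';
$qty  = 7;

// Positional specifiers (%1$s, %2$d) reorder and reuse arguments
echo sprintf('%2$d units of %1$s (%1$s ship today)', $item, $qty) . "\n";
// prints: 7 units of RC Helicopters (RC Helicopters ship today)

// The %05d specifier zero-pads a number to five digits
echo sprintf('Order #%05d', 42) . "\n";
// prints: Order #00042
```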

The debate over the most efficient method of string definition has raged for years and will likely continue ad infinitum. However, when the benchmarks show their performance as almost identical, it leaves you with one major question: What works the best for your implementation? Typically my scripts will contain all of the methods above, and often a combination of them.

print(sprintf('The %s important thing is that %s give them all a try and see for %s', 'most', 'you', 'yourself'));

-Phil
