Sunday, January 15, 2012

Sourcefabric, the Open Source Newsroom

The non-profit Sourcefabric builds digital open source newsrooms to support quality, independent journalism. Traditional news organizations have taken a major beating from the Internet, but the Internet has also created opportunities for a free press in countries that have never had one before, and Sourcefabric is part of journalism's path to the future.

Two of my favorite quotations are "Freedom of the press belongs to those who own one" (A.J. Liebling), and "Money changes everything" (Cyndi Lauper and Tom Gray).

Tie these together with the Golden Rule ("the one with the gold makes the rules") and the struggles and turmoil of modern journalism come into sharp focus. It's a different game now: print news has drastically declined, TV news limps along in a stale 40-year-old format (and inexplicably tries to replicate this moldy experience on the Web), and we get our news from a multitude of non-journalism sources like Facebook, YouTube, and Twitter.

Online news publishing should be a great boon because the cost of distribution is low, and the reach is Internet-wide. But the business model is difficult because users don't pay; it is primarily advertiser-supported and enslaved to SEO (search engine optimization) voodoo, which means Google is the giant tail wagging the publishing industry dog. News publications get paid for driving traffic, rather than for publishing good quality material.

Change is a multi-edged device, and while the Internet and high tech have damaged traditional news organizations (think buggy whips), they have also lowered the barriers to entry and widened journalism's scope. Digital photographs, videos, and audio streams are considerably easier to edit, produce, and distribute than in the olden ways. Journalists are truly mobile and news publishers are not tied to a physical location, or to large investments in printing presses and broadcast studios.

So it's a different world now, and potentially a better one because independent and non-profit news organizations can compete with the establishment mainstream press (what's left of it), and fill niches that the traditional news organizations are not interested in. Like working weekends and holidays, breaking news as soon as it's ready in multiple formats and media, and covering important topics that may not please advertisers but are of interest to readers and viewers.

Real journalism is more than sitting down to a WordPress blog and emitting deep thoughts. It is a skilled profession, and it has always been a technology-dependent business. It may not be apparent to the reader, viewer, or listener of news, but putting a story together and then publishing it is a complicated, labor-intensive process. This is where Sourcefabric comes in.

Sourcefabric is still a young project. You may have heard of the Campware (Center for Advanced Media-Prague) suite of news and content management software: Campware was created in 2005 by the Media Development Loan Fund, and then spun off as an independent organization, Sourcefabric, in April 2010. The MDLF has been around since 1995 as "a mission-driven investment fund for independent news outlets in countries with a history of media oppression."

Sourcefabric provides software, hosting services, and training. Currently there are three main software applications: Airtime, Newscoop, and Superdesk. Each one plays a different role in gathering and publishing news, so let's take them for a spin.

Airtime is for managing an Internet radio station. It runs on Apache, PostgreSQL, PHP, the RabbitMQ messaging system, and a whole lot of other good FOSS. It has a one-click installation on Debian and Ubuntu, and can be installed on pretty much any Linux distro. You can also try out a live demo, which is an excellent way to get acquainted with Airtime. You can upload your own audio, create playlists, and schedule and listen to your own programming. Its browser-based interface lets you control it from any computer anywhere (figure 1).

Figure 1: A new Airtime installation waiting for audio files and scheduling.
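For reference, the Debian/Ubuntu "one-click" route amounted to adding Sourcefabric's package repository; the repository URL and distribution name below are assumptions drawn from the era's install docs and may well have changed since:

```shell
# Add the Sourcefabric apt repository (URL and "squeeze" are assumptions; verify first)
echo "deb http://apt.sourcefabric.org/ squeeze main" | \
    sudo tee /etc/apt/sources.list.d/sourcefabric.list
sudo apt-get update
sudo apt-get install airtime   # pulls in Apache, PostgreSQL, PHP, and RabbitMQ
```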

Airtime supports streams from your own server as well as Icecast and Shoutcast, and has a fallback function that switches to a live stream if the scheduled playout fails for any reason. You can play MP3 or Ogg files, customize the interface, and add the Mixxx live DJ software for additional functionality like managing live broadcasts. Airtime is the most mature Sourcefabric application, and the average Linux nerd won't have any trouble getting it up and running.

The hardest part is learning audio production if you don't already know how to do that: how to record, edit, and optimize audio streams for broadcast. There are other great FOSS applications for this like Audacity and Ardour.

Airtime has nice user management: you can allocate time slots to different people, and they then manage their own slots. It includes jQuery widgets that automatically update a Web calendar for your users, and it creates searchable archives.

Airtime is a free download, and the Pro version adds hosting and support, ranging from $50 to $500. You can also purchase custom services if the prefab deals don't meet your needs.

Newscoop is an open source content management system specialized for journalists and newspapers. Like Airtime, it runs on a LAMP stack and requires a bit of work to install and configure; the project provides good, detailed installation instructions.

Newscoop manages the online text article publication process, and has a modular dashboard (figure 2) that shows scheduling, article status, and other information on one page. It follows the traditional print publishing layout model, with different sections (News, Arts, Politics, and such), and modern features such as commenting, RSS feeds, multimedia, and subscription services.

Figure 2: The Newscoop Dashboard displays all the publication status information you want in one view.

It has a caching module to speed up page delivery, and it automatically tailors the presentation for different browsers on different devices. It includes SEO (search engine optimization) tools, multi-language support, and geolocation support for customizing content delivery by location. Newscoop supports OpenStreetMap, Google Maps, and MapQuest Open. It is theme-able, and has a live demo to play with.

Airtime and Newscoop manage online broadcasting and publication. Superdesk manages the story creation workflow. A story might pass through several hands before it's ready for publication: assignment editor, reporter, a second contributor, photographer, copy editor, editor.

Superdesk has a customizable workflow for tracking and editing content. It outputs content in formats compatible with external design software such as Adobe InDesign and InCopy. It uses calendars and notifications so you can store information for future stories and be reminded when the time comes. Superdesk pulls together news from RSS feeds, social network sites, and wherever else you want. And it has business tools such as sales trackers, a classified ad builder, and order tracking. Superdesk is scheduled for release sometime in 2012, and you can see the source code at the Sourcefabric Wiki (free registration required).

The essence of journalism has not changed: gathering and reporting news. Sourcefabric provides modern tools for independent journalism, and free/open source software makes projects like Sourcefabric possible.


10 Most Popular Linux.com Stories in 2011

Here we are: 2011 is almost at an end. As we get ready for 2012, we wanted to take a look back at the stories that were most popular with Linux.com readers. We've pulled together a list of the 10 most popular stories from the year for you to enjoy over the holidays.

To get this year's top 10, we looked at our site statistics and pulled the most popular original pieces published in 2011. The winners were not too surprising, but do help us figure out what kind of stories to line up for next year. Without further ado, we present the 10 most popular pieces published in 2011.

Earlier this month, we took a look at the 10 most important open source projects of 2011. It looks like Linux.com readers were quite interested in the results.

Another popular post from 2011 was Brian Proffitt's look at the 7 best Linux distributions for desktops, laptops, enterprise, and more.

We also looked at the best netbook Linux distributions from 2011. Though netbooks weren't quite the hot property they were in previous years, lots of Linux folks seem to love their netbooks.

Choice is one of the things that makes the Linux desktop great. Our round-up of alternative window managers for Linux was a huge favorite when we published it early in 2011. Long live FVWM!

Nathan Willis took a good, long look at GNOME 3.0 in March. Though Willis wasn't entirely pleased with what he found, Linux.com readers seemed to be deeply interested in the new GNOME.

SSH is one of the most-used tools on Linux, so it's little surprise that a piece on OpenSSH tips and tricks cracked the top 10 for 2011. I still use the FUSE trick for mounting remote filesystems on a regular basis.

The Fedora is strong with the Linux.com community. Carla Schroder's Fedora 16 preview drew quite a bit of attention when we published it in early November. That might have something to do with the slew of new technologies and refinements (like GNOME 3.2) that were shipped with F16.

Another winner from Carla: an IPv6 "crash course" published in April. Carla took a look at the advantages of IPv6 and offered some tips and tricks for working with IPv6 on Linux.

Calendar clients are easy to find on Linux. Calendar servers? That's a bit trickier, but Nathan Willis gets it all sorted out. Can you believe a FOSS project from Apple makes the list?

The first new version of LibreOffice shipped in early 2011, and Brian was there to put it through its paces. LibreOffice has come a long way since then, of course.

There you have it, the most popular stories of 2011. What about 2012? Tell us what kind of stories you'd like to see in 2012, and beyond.

By the way, if you're curious about last year's top 10, check out the top 10 from 2010 too.


10 Best Linux.com Tutorials of 2011

One of my resolutions for the New Year is usually “learn more” about things I’m interested in. Of course, that includes Linux. If you’re looking to learn more about Linux this year or next, you might want to check out some of the tutorials that we published throughout 2011.

Every year, we try to line up topics that will be useful to the Linux.com audience. That includes new users who’ve never touched a command line before, as well as expert users who have a lot of experience under their belts. Now, in no particular order, I present some of the best tutorials from 2011. Enjoy!

Nathan Willis provides a little advocacy for his favorite editor, GNU Emacs. His GNU Emacs 101 tutorial explains why users would want to try Emacs, and gives you the basics to get started.

Carla Schroder explains how to take advantage of “one of the best features of virtualization” with Linux KVM: how to do live and offline migrations.

As Carla writes, “we had a System V (SysV) type init daemon to manage Linux system startup, and it was good. It was configured with simple text files easily understood by mortals, and it was a friendly constant amid the roiling seas of change. Then came systemd, and once again we Linux users were cast adrift in uncharted waters. Why all this change? Can’t Linux hold still for just a minute?”

No, no it can’t. Sorry. But at least we have a great intro to systemd from Carla to help deal with the new init hotness.

You might use GNU grep all the time, but glark is an alternative that may be better for you in some situations. If you’d like to learn more about glark, we’ve got a short tutorial on glark to get you started.

Networking problems can just ruin your day. Whose fault is it? Carla provides a tutorial on figuring out where network problems lie.

In a follow-up to the troubleshooting tutorial, Carla digs into figuring out who is on your network using Linux tools. This includes popular utilities like Nmap, fping, iperf, and more.

Don’t want a “fancy GUI” to manage your task list? You’re not alone. But what if you want to sync your CLI app with your Android device? In that case, take a look at Nathan’s tutorial on using todo.txt in conjunction with Android.

These days, there are a lot of reasons for users to want to be able to browse the Internet anonymously. Nathan provides a really good tutorial on browsing anonymously with Tor on Linux.

As a follow-up to the first Tor piece, Nathan provided a second tutorial on running invisible services and bridges for those who need more privacy than Tor provides by default.

They say that those who don’t know history are doomed to repeat it — but if you don’t know how to use your history in Bash, you’re just not making the most of your system. If you want to learn how to use your history, we have five tips for working with GNU Bash history on Linux.
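A few standard history tricks, of the kind that piece covers, for flavor. Note these are interactive shortcuts: Bash enables history expansion at the prompt, not in scripts.

```shell
sudo !!            # re-run the previous command, this time with sudo
cp notes.txt !$    # !$ expands to the last argument of the previous command
^tpyo^typo^        # repeat the previous command with "tpyo" corrected
history | grep ssh # search everything you've typed for "ssh"
# Ctrl-R starts an incremental reverse search through history as you type
```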

Have some topics you’d like to see covered in 2012? Let us know in the comments!


What's New in Linux 3.2?

What better way to kick off the new year than with a brand new kernel, fresh out of Kernel.org? Linus Torvalds released the 3.2 kernel on January 4th, with improvements in the Ext4 and Btrfs filesystems, thin provisioning features, a new architecture, and CPU bandwidth control.

The last major kernel release was in late October of last year. That release, if you recall, included support for OpenRISC, Near-Field Communications, and the cpupowerutils project.

Last week, Torvalds pushed 3.2 out into the world with a pretty short announcement and opened the merge window for Linux 3.3. Naturally, this release has the usual fixes and new drivers, but also some notable new features that are worth taking a look at.

Let’s face it, users have an insatiable desire for more and more storage, which means larger and larger hard drives. Unfortunately, the maximum filesystem block size for Ext4 has been stuck at 4KB, which is a bit of an inconvenience for users who work primarily with larger files. With the 3.2 release, users can increase the effective block size to a maximum of 1MB.
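The larger sizes come via allocation clusters (the "bigalloc" feature) rather than true filesystem blocks; assuming a new enough e2fsprogs, creating such a filesystem looks something like this (the device name is a placeholder):

```shell
# Format with 1MB allocation clusters; needs kernel 3.2+ to mount
mkfs.ext4 -O bigalloc -C 1048576 /dev/sdX1
```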

Btrfs has seen a fair number of small improvements in 3.2 as well. For example, Btrfs now gives more detailed messages when it encounters bad blocks or other errors. In addition, you can now manually inspect the filesystem, querying Btrfs about which files belong to a given bad block.
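That kind of query is exposed through the btrfs-progs tools; resolving a logical address from a kernel error message back to file paths looks roughly like this (the subcommand name, address, and mount point are assumptions based on btrfs-progs of the era):

```shell
# Which files occupy the logical address a kernel error reported?
btrfs inspect-internal logical-resolve 4503599627370496 /mnt/btrfs
```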

Linux adding a new architecture is not unusual, but the Hexagon Processor from Qualcomm is a bit different. Most of the CPUs supported by Linux are like the x86/AMD64 architecture that most of us use in our desktop/laptop machines or servers. (“Like,” in that they are for a wide range of general purpose machines.)

The Hexagon is a “general-purpose digital signal processor designed for high performance and low power.” It can be used for things like processing video, or could be used for the OS and digital signal processing. You probably won’t be running Linux Mint on this one anytime soon, but it might be in your next set-top box or something else that requires a lot of processing power for media but not general purpose computing.

On the TCP side, Google really wants to get search results to you a bit faster. So much so, in fact, that they have developed a better packet recovery algorithm for TCP. Google’s algorithm, “Proportional Rate Reduction,” is meant to improve latency.

As time goes on, the Linux kernel just gets more and more flexible when it comes to fine-tuning resource control. The 3.2 kernel has two notable features that will be very useful in this regard.

The first is CPU bandwidth control, which allows admins to specify how much CPU time a process group can use in a given period. For example, you could use the scheduling features to limit a group to the equivalent of N CPUs’ worth of runtime in a specific period (measured in milliseconds), or limit a group to a fraction of a CPU. When the group hits its limit, it’s throttled until the next period starts.

For instance, you could set a period of 1000ms and a quota of 1000ms, which would give a group one CPU’s worth of runtime. Or you could give the group a quota of 100ms with a period of 1000ms, which would limit the group to 1/10th of a CPU’s runtime.
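With the cgroup filesystem mounted, that second scenario looks roughly like the sketch below. The paths assume the cpu controller is mounted at /sys/fs/cgroup/cpu, the interface files take microseconds rather than milliseconds, and $SOME_PID is a placeholder:

```shell
# Limit the "batch" group to 1/10 of a CPU: a 100ms quota per 1000ms period
mkdir /sys/fs/cgroup/cpu/batch
echo 1000000 > /sys/fs/cgroup/cpu/batch/cpu.cfs_period_us   # period: 1000ms
echo 100000  > /sys/fs/cgroup/cpu/batch/cpu.cfs_quota_us    # quota: 100ms
echo "$SOME_PID" > /sys/fs/cgroup/cpu/batch/tasks           # move a process in
```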

The second feature allows over-provisioning of storage so that space isn’t wasted. Wait, what? How is over-provisioning a good thing?

Imagine you have a system with 500 users, each given a storage quota of some arbitrary amount, like 15GB. You give each user the maximum amount of storage you think is reasonable or necessary, but the odds are that many users will only use a fraction of it. For instance, in a lot of Web hosting scenarios you might provision users with 10GB of storage, but in reality many are only going to use a few hundred MB.

The thin provisioning features added to the Linux Device Mapper allow admins to over-provision, so you don’t have to have enough storage to cover the maximum storage scenario. This allows companies to avoid spending a lot of money on storage that they don’t need – an especially nifty feature now, considering the rising prices of hard drives.
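The kernel's device-mapper documentation sketches the workflow with dmsetup. The devices and sizes below are placeholders (sizes are in 512-byte sectors), so this backs a 100GB thin volume with a pool only 10GB deep:

```shell
# Build a thin pool from a small metadata device and a data device
dmsetup create pool --table \
    "0 20971520 thin-pool /dev/vg/meta /dev/vg/data 128 32768"
# Allocate thin device id 0 inside the pool, then expose it as a 100GB volume
dmsetup message /dev/mapper/pool 0 "create_thin 0"
dmsetup create thin0 --table "0 209715200 thin /dev/mapper/pool 0"
```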

Kernel development never sleeps, so while 3.2 is making its way out into the world kernel folks are busily hacking on 3.3 and beyond.

One of the more interesting features that we might see in 3.3 (remember, no promises) is a lot of Android integration. Greg Kroah-Hartman wrote in mid-December that “the next linux-next Linux kernel release should almost boot an Android userspace, we are only missing one piece, ashmem, and that should hopefully land in my staging-next tree next week. The patches are still being tested and cleaned up by others… hopefully, by the 3.3 kernel release, the majority of the Android code will be merged.”

That won’t mean a complete Android kernel in mainline Linux. Kroah-Hartman says that there’s still more work to do, but it’s significant progress nonetheless.

By the way, if you’re into “ancient” kernels, Kroah-Hartman also released the 2.6.32.53 kernel on Friday, January 6th.


Tone Your Photos in a Hurry with Delaboratory

Can you do high-quality photo correction with an application that is simple enough for normal (meaning non-photo-gearhead) people to use? Jacek Poplawski thinks so, and that's why he created Delaboratory, a point-and-click photo adjustment app that does not require a lifetime's worth of darkroom experience to understand.

Delaboratory fills an unusual niche: it is easy to use, yet it excels at a task most casual users don't pay much attention to. Intrigued? You can test drive Delaboratory for yourself; visit the project's Google Code site and you will find binary packages for Debian, Ubuntu, openSUSE, Arch Linux, and Windows, plus source code bundles you can compile for other distributions. The current release is version 0.5, which landed in October.

There are certainly other "lightweight" photo editors on the Linux landscape — most of the simple image-managers like F-Spot offer some basic adjustment controls these days. But Delaboratory gets its edge by performing its correction operations in ultra-high, 32-bit-per-channel mode, and letting you use Lab color.

What that means is that all of the pixel-blending math happens in high-bit-depth data structures where even multiple correction steps will not cause round-off errors, and in a color-space that treats brightness and color separately. In contrast, using the standard 8-bit-per-channel RGB editing space usually results in clipping, unsightly bands of color, and muddy-looking tones — more so the more corrections you make.

Yet Delaboratory also differs from your typical photographer's utility because it employs a "keep it simple" approach. That does not mean there is a shortage of tools (although there are fewer options than you will find in a high-end adjustment tool like RawTherapee or Rawstudio). Instead, Delaboratory gets its simplicity by focusing on what Poplawski considers the crucial step in producing good photos — color correction — and by employing subtle UI conventions, like offering simplified controls, and treating all of the operations you use like Photoshop-style "adjustment layers."

Fire up Delaboratory for the first time, and you will see a stripped-down editing window. There is no directory browser or thumbnail "filmstrip;" you open one image at a time. A basic histogram sits on the top right, below it the stack of layers, and at the bottom an array of text-labeled buttons. Currently the tool set includes a curves editor, a channel mixer (which lets you adjust the proportion of each color channel in the image), a blur tool, an unsharp mask tool, and a suite of color-space converters. Click on any tool button, and it pops into place on the layer stack. A checkbox next to it controls whether it is active or inactive, and two buttons allow you to tweak the layer's effect — one for the tool's settings, and one for the blending mode.

When you click on one of the tool settings buttons, a dialog pops up. You can make changes to the sliders or curves and see the results instantly, but one of the confusing aspects of this setup is that the application allows you to keep several of these dialogs open at once. They are unlabeled, so if you have two "curves" applied — say, one for luminosity, and one for color — it is easy to lose track of which is which.

Assuming you don't get confused, you can add conversions, adjustment curves, blurs, or other operations, and they stack up on top of each other. Whenever you want to, you can click on a lower level in the stack, and see each layer's effects one at a time. There is no need for undo/redo or a "history" list because all of the changes are reversible. Delaboratory works this bit of magic by only applying the effect to a preview; the final operations are not done until file export time. If you are used to image operations permanently and immediately altering the file, this takes some getting used to, but it is a change for the better.

The controls themselves are well-implemented, which is not something you notice every day. The difference is in how the sliders and histograms are labeled with what they control. A lot of image adjustment applications show curve graphs with the words "red," "green," or "blue" sitting somewhere in the title bar, but Delaboratory shows you the range of colors directly on the graph. The same goes for the mixer, which is not usually intuitive to work with. Plus, each tool dialog comes with preset buttons that let you quickly experiment: adjusting curves and mixtures by simple fractional ratios, inverting colors, and so forth. You may not always want to use a preset, but having the option is welcome.

These small touches may not sound like much when you are thinking in RGB terms, but they are available in all of Delaboratory's color modes: Lab, LCH, XYZ, HSL, and so on. Understandably, a lot of people have trouble visualizing Lab color, and some other options (such as XYZ) do not even correspond directly to the visual spectrum. Making those color spaces easy to work with is a noteworthy feat.

The down side to Delaboratory's stack-based "adjustment layers" is that you have to adjust your thinking to be stack-based as well. If you ever learned stack-based programming or Reverse Polish notation, then first of all, you have my sympathy, but second of all, you know how difficult it can be to adapt. In my estimation, Delaboratory is a bit too strict with this stack metaphor — you can only remove the top-most layer from the stack, and you cannot rearrange the layers. Many of Delaboratory's simplified sliders and tools are easier to use than their counterparts in traditional apps, but the benefits do not extend to every aspect of the UI.

Poplawski describes his own workflow on the wiki: he shoots images in raw mode, and converts them to TIFF format using RawTherapee, usually doing simple exposure adjustment and white-balancing at the same time. Then he makes his color corrections in Delaboratory, and exports the results. If there is a retouch needed (such as blotting out a speck of dust), he can do that in GIMP, because after the color corrections are made, the high-bit-depth work is pretty much finished.

As to why color correction is so important — important enough to warrant writing an application that does nothing else — Poplawski's argument is that natural-looking skin tone is what separates a good photo from a lousy one. For the most part, that is true — the first things we look at in a photo are the people. Getting skin tones right can be tricky, especially when artificial lights are involved. Similarly, a lot of people like converting their images to grayscale or sepia-toning them.

Delaboratory can definitely make both of those tasks easy, while preserving image quality better than most of the 8-bit alternatives. However, at the moment Delaboratory should still be considered a work in progress. Although it does what it does skillfully, and includes a lot of nice UI touches, it also requires you to keep another raw converter application at your fingertips, and a general-purpose image editor (e.g., GIMP) too. Hopefully when it adds more basic image editing functionality, more users will give it a try. Chances are they will like what they see; the challenge for the development team will be to integrate that experience into a less complicated editing workflow.


The Best of Linux.com Weekend Project from 2011

Weekends are for relaxing, spending time with friends… and tackling those tech projects that you never have time to get to during the week. The weekend project is one of the most popular features here on Linux.com, and we had a bumper crop of excellent projects in 2011. Here are 10 of the best from 2011, covering everything from better ways to upgrade your system to getting a leg up on Web projects.

Reproducing your current environment when you upgrade isn’t always easy. Nathan Willis undertakes a from-scratch reinstall, and provides valuable lessons that you can use when you tackle your next migration.

Debian developer Joey Hess started writing etckeeper after unsatisfying experiments with other people’s attempts to shoehorn /etc into a Git repository. A few people had done so successfully, but ran into two major problems: what to do when a package installation changes a directory or a file (so the user cannot enter the usual log entry), and what to do about metadata changes like file permissions. This weekend project shows how you can control your configuration with etckeeper.
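Getting started is short enough to sketch here (a Debian/Ubuntu example; etckeeper picks a VCS, typically git, at install time):

```shell
sudo apt-get install etckeeper           # pulls in git on most setups
sudo etckeeper init                      # create the repository in /etc
sudo etckeeper commit "initial checkin"  # record the starting state
# Package installs now auto-commit their /etc changes; review them with:
cd /etc && sudo git log --stat
```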

Ever had a Web project in mind, but got stalled at the prospect of having to worry about the site design? If code, and not design, is your strong point you’ll want to take a look at Twitter’s Bootstrap.

For the security conscious, there is always room for another weapon against attackers. Firewalls, intrusion detection systems, packet sniffers – all are important pieces of the puzzle. So too is Honeyd, the “honeypot daemon.” Honeyd simulates the existence of an array of server and client machines on your network, including typical traffic between them. In this weekend project, you’ll learn how to use Honeyd on Linux to fool attackers.

If you’ve ever needed to edit one or more files to make quick changes, you’ve no doubt found that doing it with a text editor can be a slow slog. Linux, thankfully, has a number of tools that make it easy to do this non-interactively. One of the best is sed, a “stream editor” that can make quick work of filtering and transforming text. Use this weekend project from 2011 to get to know GNU sed.
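A taste of what that looks like, assuming GNU sed:

```shell
# Replace text in a stream, no editor session required
result=$(echo "hello world" | sed 's/world/Linux/')
echo "$result"

# Edit a file in place, keeping a .bak backup (a GNU sed extension)
printf 'foo\nbar\nfoo\n' > /tmp/sed_demo.txt
sed -i.bak 's/foo/baz/g' /tmp/sed_demo.txt
cat /tmp/sed_demo.txt
```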

One of the keys to using GNU sed successfully is knowing how to use its regular expressions. If you look over sed scripts without knowing regular expressions, the effect can be pretty disconcerting. Don’t worry — it’s not as confusing as it looks. If you’ve read the “get to know GNU sed” piece, step up your sed game with an intro to using sed regular expressions.
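Two quick illustrations of why the regular expressions matter (GNU sed's -E flag enables extended syntax):

```shell
# Character classes and repetition: mask every run of digits
masked=$(echo "error 404 on line 23" | sed -E 's/[0-9]+/N/g')
echo "$masked"

# Anchors and capture groups: turn "Last, First" into "First Last"
name=$(echo "Schroder, Carla" | sed -E 's/^([^,]+), (.+)$/\2 \1/')
echo "$name"
```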

Just about every program has spell checking, but what if you’re grammatically challenged? Most apps don’t have a grammar checker built-in, even though they should. Want to get a leg up on your writing? Learn how to get grammar checking for your open source office suite. Note that After the Deadline not only works with LibreOffice, it also works with tons of other apps – including my favorite, Vim.

The Gentoo-based SystemRescue CD/USB is one of the very best rescue distros, packing amazing functionality into a 350MB image. It can rescue Linux, Unix, Mac, and Windows systems, and recover data from almost any media. With this weekend project, you’ll learn how to create a SystemRescue live USB stick, and recover data from failing drives.
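The data-recovery half of that project typically leans on GNU ddrescue, which ships on the disc. A common two-pass pattern looks like this (the device and paths are placeholders; the third argument is a map file that lets you stop and resume):

```shell
# First pass: grab everything readable quickly, skipping bad areas (-n)
ddrescue -f -n /dev/sdb /mnt/backup/disk.img /mnt/backup/rescue.map
# Second pass: go back and retry the bad sectors up to 3 times (-r3)
ddrescue -f -r3 /dev/sdb /mnt/backup/disk.img /mnt/backup/rescue.map
```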

Ubuntu switched to Thunderbird a while back, and Linux Mint has defaulted to the Moz mailer for quite some time. If you want to make the most of Thunderbird, though, you should check out this weekend project from July on adding conversations and calendaring to Thunderbird.

Have a book you want to publish? Ready to self-publish that Great American Novel that you’ve been tinkering with since high school? These days, the barriers to publishing a book are lower than ever. All you need is a manuscript and an open source application like Sigil to tame the text and massage it into EPUB format. If writing and publishing is on your resolution list for 2012, learn how to write and publish eBooks on Linux with Sigil.

Have some ideas what you’d like to see on the weekend project for 2012? Give us a shout in the comments. Happy New Year!


How to Craft a Killer Cover Letter for Linux Jobs

You're certified, bona fide, and active in your favorite open source project, but how do you craft the clever cover letter that lands your next Linux job?

If you're lucky, your reputation precedes you and dream job offers land in your in-box every other day. If you're like the rest of us, a well-crafted cover letter can make the difference between getting a phone call, or getting shuffled to the bottom of the resume pile. Before prospective employers look down the list of Linux skills and open source project experience you've laid out on your resume, they'll do a quick read of your cover letter, which is your one chance to make a fabulous first impression.

I've long hated writing cover letters. Once you've written your resume, it's pretty much good to go, with occasional updates or tweaks to highlight skills that would appeal to specific employers. Cover letters, on the other hand, need to be written with a specific employer or position in mind. And knowing that these few paragraphs need to say more than your resume does can be nerve-wracking. With a few rules in mind, you can take the torture out of writing and focus on why you are the only logical solution to an employer's high-tech needs.

First, the good news: No one wants to read your dissertation. Instead, keep your cover letter brief, yet meaty (like this rule).

Before writing anything, research the company and position for which you are applying. Read the press releases on the official company site, search the web to learn more about what others are saying about the company, and consider how your skill set will be an asset to the employer. Remember that the employer wants to know how you will meet his or her needs, so don't focus on why this job would meet your needs (unless your needs are all about focusing on the needs of your prospective employer).

In her Dice.com article called How to Target A Cover Letter to Get the Manager’s Attention, Leslie Stevens-Huffman explains, "Don't belabor points or regurgitate the information in your resume. Create a short, compelling narrative that proves you understand the company’s needs and describes how you intend to meet them."

Gayle Laakmann McDowell, author of The Google Resume and Cracking the Coding Interview, says that how you discuss your open source work depends on the type of job for which you are applying. "If you're applying for a coding role, you should focus on the code you wrote (that is, the features)," she says. "If you're applying for a Program Management job, you should focus on the leadership aspects."

Instead of going into detail in the cover letter, you could put this open source experience under a Projects section of your resume or under Work Experience, particularly if you've spent a substantial amount of time on an open source project. "Additionally, providing a link to your GitHub page or another page where a resume screener can learn more about your coding experience is always useful," she adds.

"As a hiring manager for software engineers, I'm always happy to see that a candidate is a contributor to open source projects," says Jenson Crawford, Director of Engineering at Fetch Technologies. "Participation in open source projects tells me that technology is a passion for the candidate and it's not just a job. It also shows that the candidate is interested in making things better than they were found," he says.

Crawford agrees with McDowell when it comes to including open source experience on a cover letter. "If a candidate's open source contributions can demonstrate that the candidate has the needed qualifications or experience, include the contributions to make the connection to the hiring company's needs," he explains. "If there isn't a direct connection between the candidate's open source contributions and the qualifications listed in the job posting, or if the candidate is applying to a company without a specific job posting, include information about the contributions in a general way. Perhaps something like: I'm passionate about technology and contribute to the open source projects X, Y, and Z."

Let's say you're applying for a web developer position at a company that requires Drupal experience. Here's your opportunity to show that you've researched the company and have the exact skills they seek. The job description says:

Experience with and a high degree of competency in Drupal is a must, including development of modules and themes, and familiarity with the Drupal API, hook system, form API, etc.

So your response to this job requirement might be something like, "In addition to developing more than two dozen Drupal modules and themes for my current employer, I'm also active in the Drupal community and recently gave a DrupalCon Lightning talk about submitting patches." Now the prospective employer knows that your resume includes required skills, but you've also added something more personal about your specific skills, which brings us to the next rule.

Showing your personality in a resume is no easy feat, so be sure to do it in your cover letter. "Not only do you want to show that you're a good fit for the position, but you also want the reader to like you," explains Kim Isaacs, Monster Resume Expert. "Appropriate use of humor, combined with a friendly and professional tone, can help endear you to the hiring manager." Still, keep in mind that the cover letter is a formal document and not an email to your BFF, so keep the tone professional, too.

If you're sloppy on your cover letter, employers can safely assume you'll be sloppy in your code or work habits. In addition to spell checking your letter, consider having a friend or colleague proofread it, too. Reread the job description and your letter — have you shown that you understand the position for which you are applying and that you're the ideal candidate for the role?

You have as long as you need to proof the cover letter before sending it in, so be sure that it doesn't make a trip to the circular file because it's sloppy.

If you're not quite ready to apply for that great Linux job, consider ways to get involved in the community at events or with Linux training opportunities.

What other advice do you have for crafting clever cover letters? Share your success stories (or horror stories) in the comments below.
