Pages

What Programming Language Should I Use to Build a Startup?

Entrepreneurs often ask me, "What technology should I build my startup on?" There is no right or wrong answer to this question. It's a decision every company makes for itself, depending on what it's trying to build and the skills of its cofounders. Nonetheless, there are a few rules one should adhere to. We discuss them in this blog post.

Incident Response Policy

What happens in your company when a production incident occurs? In a typical startup, you will see engineers running around frantically trying to resolve the problem. Yet as soon as the incident is resolved, they forget about it and go back to business as usual. A good incident response policy can help bring order to the chaos. We provide a sample template in this blog post.

Why Software Deadlines Never Make Sense

We discuss why software deadlines usually don't make sense.

Analyzing Front-End Performance With Just a Browser

We discuss a number of freely available online tools which can be used to analyze bottlenecks in your website.

Why Smaller Businesses Can't Ignore Security and How They Can Achieve It On a Budget

In this article, we show that security is both important and achievable for smaller companies without breaking the bank.

Thursday, March 31, 2011

Determining your stress level

The picture below has two identical dolphins in it. It was used in a case study on
stress levels at St. Mary's Hospital.

Look at both dolphins jumping out of the water. The dolphins are identical.

A closely monitored, scientific study of a group revealed that, even though the
dolphins are identical, a person under stress will find differences between the two.
The more differences a person finds between the dolphins, the more stress that
person is experiencing.

Look at the photograph and if you find more than one or two differences you may
want to take a vacation.

Scroll down slowly.


Saturday, March 26, 2011

White to move and mate in 3


Nice brainteaser. I had to think about this one for a long time.

Who Not to Hire

Interesting post in which the author argues that .NET programmers should not be hired:
http://blog.expensify.com/2011/03/25/ceo-friday-why-we-dont-hire-net-programmers/


He claims that
1) startups never use .NET;
2) .NET does everything for the programmer, so the programmer has no idea what happens under the covers.

To dispute argument 1: there are a multitude of successful startups that use .NET, such as ZocDoc and Phreesia.

Argument 2 is harder to dispute. But you could say the same about Ruby on Rails, Java, or many other languages and frameworks nowadays. Just having .NET on a resume should not scare you away from hiring a person. Instead, dig deeper and see whether the person knows how the language operates under the hood.

Friday, March 25, 2011

Photoshoot in Gilt Groupe's warehouse

Sunday, March 20, 2011

Google's quest to build a better boss

The New York Times published a nice article about what it takes to be a good engineering manager. Google engineers wrote a program to analyze a set of their employees' performance reviews, correlate the key words, and come up with must-have criteria for being a good boss.

I liked this quote:
“In the Google context, we’d always believed that to be a manager, particularly on the engineering side, you need to be as deep or deeper a technical expert than the people who work for you,” Mr. Bock says. “It turns out that that’s absolutely the least important thing.”

It's true: while it helps to be very strong technically, that alone doesn't necessarily make you a good manager. I am surprised they had to write a program to find this out.

Performance of ASP.NET on Windows/IIS vs Linux/Apache

Overview

The rivalry between proponents of the .NET and Java frameworks has always reminded me of the rivalry between the Red Sox and the Yankees. Each framework has its ups and downs, but each is, in the end, a solid platform for building enterprise-level applications. One major drawback of .NET compared to Java is that it’s not really platform independent and is geared heavily towards the Microsoft Windows operating system. The Mono project is a cross-platform, open-source implementation of the .NET framework. Mono makes it possible to run .NET applications, including ASP.NET, under Linux using Apache, without being bound to specific technologies and their associated licensing fees.

I’ve always suspected Apache would be faster than IIS but had never tested it. This weekend I had some spare time on my hands, so I decided to compare the performance of a typical ASP.NET application running on Windows with the IIS web server versus its performance under Linux with Apache. What I found is that, on average, Apache served requests three times faster than IIS.

A word of caution: this comparison wasn’t scientific, as different background processes may have affected the timing. We’re also comparing apples to oranges, since the Mono framework doesn’t implement the full breadth of features available in the Microsoft .NET CLR. However, for simple programs, I am still convinced that using Apache will yield better performance. Below are my notes on the performance evaluation.

Test Platform

I used my home machine: an Intel Core 2 Duo T6600 @ 2.20 GHz (2 cores, 2 logical processors) with 4 GB RAM. It’s partitioned to dual-boot into either Windows Vista or Ubuntu Linux, so I could run the tests on two different technology stacks on identical hardware.

Test Program

I used a test program from a CodeProject article on Mono. It’s effectively a ‘hello world’ program on steroids, which prints your computer’s configuration along with details of the HTTP request.

using System;
using System.Web.UI.WebControls;

namespace SimpleWebApp
{
    public class SimplePage : System.Web.UI.Page
    {
        // Labels bound to controls declared in the .aspx markup.
        protected Label operatingSystem;
        protected Label operatingSystemVersion;
        protected Label requestedPage;
        protected Label requestIP;
        protected Label requestUA;
        protected Label serverName;

        protected override void OnLoad(EventArgs e)
        {
            DisplayServerDetails();
            DisplayRequestDetails();

            base.OnLoad(e);
        }

        // Server-side details: machine name and OS version.
        private void DisplayServerDetails()
        {
            serverName.Text = Environment.MachineName;
            operatingSystem.Text = Environment.OSVersion.Platform.ToString();
            operatingSystemVersion.Text = Environment.OSVersion.Version.ToString();
        }

        // Details of the incoming HTTP request.
        private void DisplayRequestDetails()
        {
            requestedPage.Text = Request.Url.AbsolutePath;
            requestIP.Text = Request.UserHostAddress;
            requestUA.Text = Request.UserAgent;
        }
    }
}

Benchmarking

For benchmarking, I used the ApacheBench (ab) tool, which fired off 8,000 requests at the web page with a concurrency of 100. On average, Apache with the mod_mono module served requests three times faster than the IIS installation.

1. Apache with mod_mono under Ubuntu

$ ab -n 8000 -c 100 -g out.dat http://localhost/index.aspx
This is ApacheBench, Version 2.3 <$Revision: 655654 $>

Server Software: Apache/2.2.11
Server Hostname: localhost
Server Port: 80

Document Path: /index.aspx
Document Length: 1992 bytes

Concurrency Level: 100
Time taken for tests: 20.273 seconds
Complete requests: 8000
Failed requests: 6
(Connect: 0, Receive: 0, Length: 6, Exceptions: 0)
Write errors: 0
Non-2xx responses: 6
Total transferred: 18559720 bytes
HTML transferred: 15927990 bytes
Requests per second: 394.62 [#/sec] (mean)
Time per request: 253.408 [ms] (mean)
Time per request: 2.534 [ms] (mean, across all concurrent requests)
Transfer rate: 894.05 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 2.3 0 24
Processing: 19 252 55.6 244 619
Waiting: 13 252 55.7 243 619
Total: 28 252 55.1 244 619

Percentage of the requests served within a certain time (ms)
50% 244
66% 258
75% 267
80% 273
90% 297
95% 340
98% 407
99% 560
100% 619 (longest request)

2. IIS under Windows Vista

C:\> ab.exe -n 8000 -c 100 http://localhost/index.aspx
This is ApacheBench, Version 2.3 <$Revision: 655654 $>

Server Software: Microsoft-IIS/7.0
Server Hostname: localhost
Server Port: 80

Document Path: /index.aspx
Document Length: 1964 bytes

Concurrency Level: 100
Time taken for tests: 63.468 seconds
Complete requests: 8000
Failed requests: 0
Write errors: 0
Total transferred: 17632000 bytes
HTML transferred: 15712000 bytes
Requests per second: 126.05 [#/sec] (mean)
Time per request: 793.347 [ms] (mean)
Time per request: 7.933 [ms] (mean, across all concurrent requests)
Transfer rate: 271.30 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 7 5.3 6 132
Processing: 162 780 189.1 710 1665
Waiting: 12 435 255.3 420 1576
Total: 167 787 190.4 716 1683

Percentage of the requests served within a certain time (ms)
50% 716
66% 751
75% 776
80% 805
90% 1021
95% 1272
98% 1453
99% 1527
100% 1683 (longest request)

3. A picture is worth a thousand words

Here is a comparison plot as the number of requests increases.
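The raw data behind such a plot is easy to work with: the -g out.dat flag above makes ab dump one tab-separated row per request. Here is a minimal Python sketch to summarize the timings, assuming ab's standard column header (where ttime is the total time per request in milliseconds):

```python
# Sketch: parse ApacheBench's -g (gnuplot) output and summarize timings.
# Assumes ab's standard header row: starttime seconds ctime dtime ttime wait.
import statistics

def parse_ab_gnuplot(lines):
    """Return the total time (ttime, ms) of every request in the file."""
    rows = [ln.rstrip("\n").split("\t") for ln in lines if ln.strip()]
    header, data = rows[0], rows[1:]
    ttime_col = header.index("ttime")
    return [int(row[ttime_col]) for row in data]

# Tiny inline sample standing in for a real out.dat:
sample = [
    "starttime\tseconds\tctime\tdtime\tttime\twait",
    "Sat Mar 26 12:00:01 2011\t1301155201\t0\t244\t244\t243",
    "Sat Mar 26 12:00:01 2011\t1301155201\t1\t257\t258\t250",
]
times = parse_ab_gnuplot(sample)
print("median:", statistics.median(times), "max:", max(times))
```

Feeding both out.dat files through something like this (or straight into gnuplot) is how the comparison plot can be produced.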

Friday, March 18, 2011

MoSCoW method for project management

One technique that I really like to use for prioritizing software development projects is the MoSCoW approach.
The capital letters in MoSCoW stand for:
  • MUST have this
  • SHOULD have this
  • COULD have this
  • WON'T have this.

Developers love to write code. However, you will save lots of time down the road if, before you start implementing a software project, you prioritize all the requirements according to this simple method. Focus your time on getting all the MUSTs in place and thoroughly testing them. Only after they are in place should you move on to the SHOULDs. If you have extra time left, go for a walk, get a Starbucks latte, sit back, and then think about whether you could squeeze in the COULDs.
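To make the idea concrete, here is a small illustrative Python sketch (the backlog items are made up) that orders a backlog by MoSCoW priority and drops the WON'Ts:

```python
# A minimal sketch of MoSCoW prioritization: sort a backlog so that
# MUSTs come first, then SHOULDs, then COULDs; WON'Ts are dropped.
PRIORITY = {"MUST": 0, "SHOULD": 1, "COULD": 2}

def prioritize(backlog):
    """backlog: list of (requirement, moscow_label) tuples."""
    kept = [(req, label) for req, label in backlog if label != "WON'T"]
    return sorted(kept, key=lambda item: PRIORITY[item[1]])

backlog = [
    ("Export to PDF", "COULD"),
    ("User login", "MUST"),
    ("Social sharing", "WON'T"),
    ("Password reset", "SHOULD"),
]
for req, label in prioritize(backlog):
    print(label, req)
```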


Follow-up recommendations from RSA

    Overall Recommendations:

    RSA strongly urges customers to follow both these overall recommendations and the recommendations available in the best practices guides linked to this note.

    * We recommend customers increase their focus on security for social media applications and the use of those applications and websites by anyone with access to their critical networks.
    * We recommend customers enforce strong password and pin policies.
    * We recommend customers follow the rule of least privilege when assigning roles and responsibilities to security administrators.
    * We recommend customers re-educate employees on the importance of avoiding suspicious emails, and remind them not to provide user names or other credentials to anyone without verifying that person’s identity and authority. Employees should not comply with email or phone-based requests for credentials and should report any such attempts.
    * We recommend customers pay special attention to security around their active directories, making full use of their SIEM products and also implementing two-factor authentication to control access to active directories.
    * We recommend customers watch closely for changes in user privilege levels and access rights using security monitoring technologies such as SIEM, and consider adding more levels of manual approval for those changes.
    * We recommend customers harden, closely monitor, and limit remote and physical access to infrastructure that is hosting critical security software.
    * We recommend customers examine their help desk practices for information leakage that could help an attacker perform a social engineering attack.
    * We recommend customers update their security products and the operating systems hosting them with the latest patches.

    Thursday, March 17, 2011

    RSA hacked through advanced persistent threats

An article by TechTarget claims that RSA has been hacked through an APT (advanced persistent threat). An advanced persistent threat is a custom-developed Trojan that is not detected by any anti-virus. Once planted, the Trojan typically stays dormant in the system for a long time, trying to remain undetected, and strikes at an opportune moment.
What's worrying is that the article claims this attack could potentially be used to reduce the effectiveness of SecurID two-factor authentication.

I highly doubt that this is the case, because a SecurID token is effectively a series of cryptographically secure pseudorandom numbers F_SK(x) generated from a seed SK, for varying inputs x typically derived from the clock. The details of the SecurID algorithm are confidential, but I would guess the seeds SK are unique per user and are stored on RSA SecurID servers hosted by individual companies. Because those servers are hosted outside of RSA's network, it's unlikely that the seeds can be stolen. I would also hope that the algorithm is sound, so even if it became known, that alone should have no impact.
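To illustrate the shape of such a scheme (and only the shape: the real SecurID algorithm is confidential, and this HMAC-based construction is my own stand-in), here is a toy Python sketch of a clock-driven token generator:

```python
# Toy illustration only -- NOT the actual SecurID algorithm.
# A per-user seed SK and a clock-derived input x are fed to a keyed PRF
# (here HMAC-SHA256), then truncated to a 6-digit code.
import hashlib
import hmac

def toy_token(seed: bytes, timestamp: int, window: int = 60) -> str:
    x = timestamp // window  # input derived from the clock
    digest = hmac.new(seed, str(x).encode(), hashlib.sha256).digest()
    return str(int.from_bytes(digest[:4], "big") % 10**6).zfill(6)

# The server holds the same seed, so both sides compute the same code
# within the same time window.
print(toy_token(b"per-user-seed", 1_300_000_000))
```

Stealing the seeds would break such a scheme entirely, which is why where they are stored matters so much.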




    Wednesday, March 16, 2011

    Obfuscating Javascript redirects

Modern malware uses various JavaScript obfuscation techniques to make its source code hard to understand.
Kahu Security has a very nice blog post describing several ways this can be done:
http://www.kahusecurity.com/2011/making-wacky-redirect-scripts-part-i


The technique I liked the most is the onerror trick, which
counts the characters between semicolons (“;”) to construct JavaScript code that redirects the user to Google.
    Kudos to the author!
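To see why the trick works, here is a hedged Python sketch of the general idea (the exact payload layout in Kahu's post may differ): each run of filler characters between semicolons encodes one character code of the hidden script.

```python
# Sketch of the "count characters between semicolons" obfuscation idea.
# The '#' filler character and the layout are illustrative assumptions.
def encode(script: str) -> str:
    """Build a payload whose segment lengths are the script's char codes."""
    return ";".join("#" * ord(c) for c in script)

def decode(payload: str) -> str:
    """Recover the hidden script by counting characters between ';'."""
    return "".join(chr(len(seg)) for seg in payload.split(";"))

hidden = 'window.location="http://www.google.com"'
payload = encode(hidden)
assert decode(payload) == hidden  # round-trips back to the redirect code
```

The payload itself contains no suspicious keywords at all, which is exactly what makes this style of obfuscation hard to flag.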

    Monday, March 14, 2011

    Seven Hardening Steps For Your Windows Machines That You Wish You Knew About

By now, security idioms such as “use a strong administrative password” or “install anti-virus software” are firmly ingrained in the minds of most system administrators.
These steps comprise the regular hardening procedures for most production systems.
In this article, we describe seven hardening steps that you don’t commonly hear about, yet which are crucial to ensuring the security of your Windows servers.
These steps should be considered complementary to the regular hardening steps, not a replacement for them.

1. Stop sharing all your files with the world. By default, Windows enables an administrative share for each logical disk on your system. The intent of this feature was to let administrators access the machine remotely over the network. For example, by typing “cd ADMIN$” an administrator can reach %systemroot% on that machine, and by typing “cd C$” the root directory of drive C:\. Of course, a hacker who cracks the administrator’s password can do the same. It’s therefore recommended to disable this functionality on critical servers.


  • To disable these shares, find the HKLM\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters registry key and add the DWORD values AutoShareWks = 0 and AutoShareServer = 0. Verify that the default shares no longer exist by running “net share”; you should not see any.


    2. Secure your temporary folders.

Many applications create temporary files while you do your work. Most of the time these files are readable and writable by any process. In theory, they should be cleaned up when the programs exit; in practice, that doesn’t always happen. Leaving them around presents a bounty to a hacker. To protect your Windows servers, we recommend two things: encrypt the contents of temporary folders, and regularly search for sensitive data using the Spider tool.


  • Open an Explorer window and go to C:\Documents and Settings\\Local Settings\Temp. Right-click the folder, choose Properties > Advanced, and select “Encrypt contents to secure data.”
  • Sensitive data can lurk in many places outside of temporary folders. You can check for the presence of sensitive information (such as credit card data, passwords, and personally identifiable information) using the Spider tool (http://www2.cit.cornell.edu/security/tools/).


    3. Enable outbound filtering on the firewalls

Most organizations enable inbound filtering on their firewalls to keep the bad guys out, but they forget about outbound filtering. Outbound filtering aims to stop sensitive data from leaving the corporate network even if an attacker has managed to penetrate the defenses.
It makes sense to block traffic on the following ports from exiting the network: 25 (SMTP), 139 (NetBIOS), 143 (IMAP), 445 (SMB), 3389 (RDP), 5900 (VNC), and various known Trojan ports. Good lists of dangerous ports are available online. The ports can be blocked at the network firewall level, on the firewalls of individual servers, or both!




    4. Log suspicious activity to help spot intrusions.

To enable auditing in Windows, go to Control Panel -> Administrative Tools -> Local Security Policy. In the Local Security Settings window, click Local Policies, then Audit Policy. At a bare minimum, you should enable auditing of logon events, privilege escalation, and bad user names or passwords.



Logs contain a treasure trove of data, but they are useless if you don’t log the proper events or don’t actively monitor them.
  • Make sure to aggregate your server logs into a SIEM (security information and event management) system. This ensures that an attacker can’t hide his traces by removing logs on individual servers, and it facilitates log review and correlation.


There are a multitude of SIEM solutions out there; some prominent ones include Splunk, Tripwire Log Center, and RSA enVision. If you are on a budget, you could instead write a script that copies all the Windows security/audit logs to a central server.
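A hedged sketch of that budget approach: copy exported Windows event logs (.evtx files) from a local folder to a per-host folder on a central share. The paths and the collect_logs name are illustrative; adjust them for your environment.

```python
# Sketch: collect exported Windows event logs onto a central server.
# Run from a scheduled task on each server; paths below are examples.
import shutil
from pathlib import Path

def collect_logs(local_dir: str, central_dir: str, host: str) -> int:
    """Copy every .evtx file into a per-host folder on the central share."""
    dest = Path(central_dir) / host
    dest.mkdir(parents=True, exist_ok=True)
    copied = 0
    for log in Path(local_dir).glob("*.evtx"):
        shutil.copy2(log, dest / log.name)  # copy2 preserves timestamps
        copied += 1
    return copied

# Example invocation on a server named WEB01:
# collect_logs(r"C:\Exports\Logs", r"\\logserver\siem$", "WEB01")
```

This is no substitute for a real SIEM's correlation features, but it does defeat the "wipe the local log" trick.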




    5. Set up intrusion detection software on your servers.

Installing anti-virus on your servers is no longer sufficient. Zero-day malware exploits vulnerabilities that are not yet publicly known, and its signatures are usually not yet in anti-virus databases. According to AusCERT (the Australian Computer Emergency Response Team), top-selling anti-virus solutions let in nearly 80% of new malware.
There are two emerging approaches to combat such malware.

  • The first is to whitelist software which is considered safe to run, while blocking all other software (e.g., Bit9).
  • The second is to install a host intrusion detection system (HIDS), which uses machine-learning algorithms to differentiate good software from bad.


    6. Disable any unnecessary remotely accessible services.

In a typical network layout, DMZ servers will be hardened and have few ports open to the outside world; yet internal servers may have a multitude of ports open, ranging from VNC and Bonjour to unauthenticated MySQL. The thinking is that an attacker won’t get through the gateway firewalls, so there’s nothing to worry about. In reality, this strategy can fall apart like a house of cards.

  • Just like in the DMZ, you should minimize the number of services running on any one internal server, on a strictly as-needed basis. Furthermore, isolate sensitive servers into a completely separate VLAN not shared with any others.
On individual servers, you can use Foundstone’s Fport program to see which remotely accessible services are open (http://www.foundstone.com/resources/proddesc/fport.htm).
You should be able to explain every service that’s open. If you are in doubt, close it.


    7. Patch your servers correctly

Many companies manually patch their servers when critical patches come out, but they lack a sound strategy that articulates the patch lifecycle or sets SLAs for fixing vulnerabilities.

  • Make sure to develop such a strategy. Instead of applying patches manually, use a group policy in an Active Directory environment to configure automatic updates so that all critical patches are applied automatically. Also prevent non-administrative users from manually applying updates.
Go to User Configuration – Administrative Templates – Windows Components – Windows Update and select “Remove access to use all Windows Update features.”
To find out which patches are missing, you can use the Microsoft Baseline Security Analyzer to assess the security of the host.




Dr. Aleksandr Yampolskiy heads the Security and Compliance team at a well-known e-commerce company. Prior to that, he was a lead technologist, developing SSO, authentication, and authorization solutions at several Fortune 100 companies. Aleksandr has advised various businesses on best practices for integrating security into their products while complying with external policies and regulations. He has been cited in the NY Times and Yale Scientific, and has published half a dozen articles in top cryptography conferences. In 2006, he was awarded the Best Paper Award at the Public Key Cryptography conference for discovering the most efficient verifiable random function to date. He has a B.A. in Mathematics from New York University and a Ph.D. in Cryptography from Yale University. He maintains his website at http://www.alexyampolskiy.com. You can also follow him on Twitter at http://www.twitter.com/ayampolskiy.

Nice comparison of MongoDB, Memcached, CouchDB, PostgreSQL, and MySQL

    Sunday, March 13, 2011

    SHA-3 status report is published

    NIST has published the list of finalists for the SHA-3 hash algorithm, along with summaries of second-round candidates. Here it is.

    Friday, March 11, 2011

    Seven Samurai

I am currently reading a book by Joel Spolsky, "Smart and Gets Things Done."
It's a good book that describes how to cultivate and attract tech talent in your organization.

In the book, Spolsky makes an interesting analogy using Akira Kurosawa's classic "Seven Samurai" (1954). In the movie, a poor village under attack by bandits recruits seven unemployed samurai to help defend it. The village is poor and can barely afford to feed the samurai, let alone pay them. Yet the samurai still agree to come and defend the village out of goodwill. Spolsky compares hiring developers to hiring the seven samurai.
Many startups can't afford to pay high salaries, so they must captivate the hearts of the developers who join them with their idea.

Ultimately, people who love what they work on are going to be productive, and in the end it's not about the amount of money you pay them. Lots of my friends on Wall Street rack up good bonuses at the end of the year, but it doesn't compensate them for the high blood pressure, lack of work-life balance, and gray hair.




    Tuesday, March 8, 2011

    Resilience

A great quote from "Linchpin" by Seth Godin on the importance of resilience.

    "You will fail at this. Often. Why is that a problem? In fact, this is a boon. It's a boon because when others fail to be remarkable or make a difference or share their art or have an impact, they will give up. But you won't, you'll persist, pushing through the dip. Which means that few people will walk in the door with your background, experience, or persistence."

    Friday, March 4, 2011

    Five Questions You Must Ask Your PCI Auditor Before You Hire Him

    Cautionary Tale Told By My Friend
The QSA walked into the conference room, sat down, and took out a thick beige folder labeled “PCI”. She looked uneasy as she started going down the list of questions from top to bottom:
- “Umm, okay, requirement 8.1. Identify all users with a unique user name … Do you do that?” she asked.
- “Yes, we do. However, in a few cases we have to use generic accounts in our legacy systems. Access to these generic accounts is restricted and carefully audited.”
- “That’s no good. You have to re-architect your legacy systems,” she replied.
She took out a piece of gum and started chewing, staring intently while waiting for a response.

    At that point my friend (and a fellow CISO at an online retailer) paused and nervously sipped his tea. The next day he called the QSA’s manager and she was taken off the project. He had to search for a new QSA.

    What is PCI?
The PCI DSS (Payment Card Industry Data Security Standard) was created by a consortium of credit card companies to enforce a set of minimum security standards. If your company accepts credit cards as a form of payment, it must comply with the PCI standard. Companies that are Level 1 merchants (processing over 6 million Visa transactions annually) must undergo an onsite data security assessment by a QSA (Qualified Security Assessor), who signs off on their PCI compliance status.

Unfortunately, many organizations view PCI compliance as a “necessary evil” and aim to get it over with as quickly as possible. They will hire the first QSA that comes along who will check off the boxes in the PCI questionnaire and give a stamp of approval. Many smaller companies that are not Level 1 merchants are eager to skip hiring a QSA to save money, since they are not obligated to do so. These are all crucial mistakes. You want to use PCI compliance to tighten the security in your company, and you don’t want a QSA to let you off easy. You want your QSA to be knowledgeable, fair, and impartial.

    The Five Questions You Need To Ask
Before hiring a QSA, make sure you obtain as much information about him as possible. Before letting him come in and grill you, turn the tables and interview your QSA.

    We now list the five questions that you can’t hire your QSA without asking:

    1. Can you show me your CV?

A good QSA needs sufficient security and technical skills to perform the audit effectively. Just understanding the PCI DSS specification is not enough. The best QSAs will have a background in information security and experience working as a penetration tester, risk analyst, or CISO. This background will enable them to make tough judgment calls, assess whether your firewall configurations are correct, and understand whether your compensating controls are sufficient. Note that you can verify a QSA’s standing at the following link: https://www.pcisecuritystandards.org/approved_companies_providers/verify_qsa_employee.php. You can also use this link, https://www.pcisecuritystandards.org/pdfs/pci_qsa_list.pdf, to check for QSA companies that have had at least one QSA fail to perform an adequate PCI DSS assessment.

    2. What types of companies have you provided assessments for?

Make sure that the QSA you hire has performed PCI audits of companies in your line of business and knows the challenges they face. If you run an e-commerce business selling luxury goods online, and the QSA has only dealt with large financial services firms, he may not be the right guy for the job.

    3. What is your stance on compensating controls?

The goal here is to weed out QSAs who read the requirements verbatim. The only requirement written in stone is requirement 3.2 (do not store sensitive authentication data subsequent to authorization). All other requirements convey intent. For example, it’s perfectly OK to use a compensating control if you can’t assign a unique ID to every user (requirement 8.1), as long as you document and monitor all generic, shared IDs. A good QSA will understand that, while a bad one will force you to waste valuable time re-architecting your system just to meet the requirement verbatim, without actually making it more secure. When you interview a QSA, don’t be afraid to dig in further: give a concrete scenario where your company had to use a compensating control and see how the QSA reacts.

    4. How much do you charge, and what are the deliverables at the end of the engagement?

The best QSAs are usually employed by the best QSA companies, and they do not come cheap. Be wary of selecting a security vendor who happens to do PCI audits on the side just to save a few bucks. A simple PCI audit lasting a few weeks onsite will cost you $20K-$30K. More extensive PCI audits will cost on the order of $100K. If you have allocated a low budget for a PCI audit, you are setting yourself up for failure.

    There are two types of services which QSAs usually offer:
- Gap analysis (a.k.a. a PCI preparedness exam): a QSA will assess how prepared you are for the PCI audit. He will advise you on what remains to be done and which controls you may need to change to pass the audit. The results of a gap analysis are not formally reported.

- PCI audit: this is the real thing, where a QSA will request documentation, interview appropriate personnel, and assess your controls. At the end of the audit, you should expect to receive an IROC (initial report on compliance). You will have roughly four months to remediate any gaps found, after which you will receive a final ROC (report on compliance).


    5. Will you be available throughout the year in case we have any questions?

PCI compliance represents a point in time, not a permanent state of being. You will have to maintain compliance throughout the year, and questions will naturally arise. You want to stay in touch with your QSA, maintain a good relationship, and ideally hire him to come back the following year. So if your QSA tells you that he is planning a two-year vacation in Hawaii, or that you need to route all questions through his company’s support line, run away!

    This article has been cross-posted from Gilt Technology blog http://tech.gilt.com.