When was the last time you saw a good documentary about the origins of computer hacking? Well, Code 2600, a new documentary from a young filmmaker named Jeremy Zerechak, comes really close to being both accurate and entertaining, while scaring the pants off anyone who doesn't yet know that computer data is eternal and can be stolen by the wrong people if we're not careful. So it is fitting that the documentary, currently available only in limited release, will be shown next Friday at DefCon, the world's largest hacker conference, which this year celebrates its 20th anniversary.
Code 2600 is a rich visual history of computer hacking, as told by some of its principal participants.
The film opens with news of a Soviet satellite orbiting the earth in the late 1950s. The United States, which once thought itself on top of the world in technology, found itself behind. Suddenly, says Zerechak, the US military was keen on computer technology. He points out that in the 60s and 70s the military had all the best high-grade computer equipment, but after the computer revolution of the 80s and 90s that was no longer the case, with the military today buying off-the-shelf mobile devices.
Somewhere in those intervening 60 years of military history lie the origins of computer hacking.
Like Steven Levy's 1984 classic book Hackers, the film explores early computer hackers who studied the original wired telephone switching system. One hacker, John Draper, discovered that the sound produced by an inexpensive Cap'n Crunch cereal toy whistle could interrupt the normal AT&T long-distance billing process. This 2600 hertz tone (hence the title of Zerechak's documentary) was very important to early hackers, known as Phone Phreaks, who wanted to access fast computers on the other side of the world without paying long-distance charges. AT&T, at great expense, began to change its switching system.
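The 2600 Hz tone itself is easy to reproduce in software. The short Python sketch below (the sample rate and duration are illustrative choices, not a recreation of the whistle) generates the tone and sanity-checks its frequency by counting zero crossings:

```python
import math

SAMPLE_RATE = 8000   # samples per second, telephone-grade audio
FREQ = 2600          # Hz: the tone that reset AT&T's in-band signaling
DURATION = 0.1       # seconds of audio to generate

# Generate a pure sine wave at 2600 Hz.
samples = [
    math.sin(2 * math.pi * FREQ * t / SAMPLE_RATE)
    for t in range(int(SAMPLE_RATE * DURATION))
]

# A sine wave crosses zero twice per cycle, so over 0.1 s of a 2600 Hz
# tone we expect roughly 2 * 2600 * 0.1 = 520 sign changes.
crossings = sum(1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0))
print(crossings)  # roughly 520
```

Fed to a sound card, those samples are the whole exploit: the billing equipment listened on the same channel as the voice, so a clean in-band tone was indistinguishable from the network's own signaling.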
Around the same time, the Homebrew Computer Club was starting in the San Francisco Bay Area. Member Bob Lash remembers a young Steve Wozniak showing off his early Apple computers, along with everyone else who was building their own computers at the time. There was a lot of trial and error. But smart people were able to do very sophisticated things at home.
Throughout the film, Zerechak uses classic footage to capture a moment or to make a point. One recurring sequence is the 1950s black-and-white footage of Dr. Claude Shannon, mathematician, cryptographer, and the father of information theory, with his metal mouse and its square maze. This was one of the first experiments in artificial intelligence, demonstrating how Theseus, his robotic mouse, could learn and adapt to a rapidly changing environment. This is an obvious metaphor for computer hackers who probe the phone networks, and later the Internet, simply wondering what is connected to what.
In one of his interview segments, Marcus Ranum, Chief Security Officer at Tenable Security, says that in the early days there was limited addressing. In other words, without a Google search, you had to know where on the Internet you wanted to go. Or, like the metal mouse, you had to search until you found something new or interesting. Often, you used your phone modem to find other phone modems. In looking for computers set with default “guest” accounts, hackers used war dialing — randomly dialing phone numbers until they got a computer on the other end — to access corporate or military computers. At the time, says Ranum, system administrators would laugh at logs that showed 800 attempts for access using the default word “guest.” But that was when the Internet was still an intimate community of military, academics, and a few curious hackers, barely a few years removed from the days of the early ARPANET that predates today’s Internet.
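War dialing is almost trivial to express in code. The sketch below simulates it against a fake exchange (the numbers and responses here are invented for illustration); a war dialer of the era drove a modem through this same loop against real phone numbers:

```python
# Hypothetical directory of what answers at each number. In a real war
# dial, this lookup is the phone network itself, probed one call at a time.
FAKE_EXCHANGE = {
    "555-0100": "voice",
    "555-0142": "modem",   # carrier tone detected
    "555-0173": "modem",
    "555-0199": "fax",
}

def dial(number):
    """Stand-in for placing a call and listening for a carrier tone."""
    return FAKE_EXCHANGE.get(number, "no answer")

def war_dial(prefix, start, end):
    """Sweep a block of numbers and record which ones answer with a modem."""
    hits = []
    for n in range(start, end + 1):
        number = f"{prefix}-{n:04d}"
        if dial(number) == "modem":
            hits.append(number)
    return hits

print(war_dial("555", 100, 199))  # → ['555-0142', '555-0173']
```

Each hit would then be probed by hand, often starting with the default "guest" account Ranum describes.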
That shift, from an invite-only world to what we have today, is important; it's when hackers realized they were no longer alone on the Internet and had to go underground. Jeff Moss, founder of Black Hat and DefCon, describes in one of his interview segments growing up in the Bay Area in the 1980s with one of the first affordable home computers, which, with a modem, connected over the phone to various bulletin boards. He says that he could connect and no one would know his true identity or age; he would be judged only by what he wrote. For a 14-year-old boy, Moss says, it was liberating to be able to talk about sex and drugs.
Then in the early 1990s, Moss says, AOL, Prodigy, and CompuServe destroyed the local community bulletin board, opening up what had been an exclusive neighborhood of thought and discussion to the entire world. It created a gold rush: it gave us spamming and phishing, both of which got started only once the masses started surfing the net. It also threatened to push the curious hacker community into a dark corner, until Moss founded DefCon in the summer of 1992. DefCon is a real-world computer bulletin board where communities of hackers and law enforcement talk openly about the Internet with an eye toward fixing what is broken.
Not every computer hacker is malicious; Moss makes the point that there are good plumbers and bad plumbers. And not all famous computer hackers are ex-felons like Kevin Mitnick. Zerechak's film includes footage of the Boston-based L0pht Heavy Industries members testifying before Congress in May of 1998, saying confidently that they had the knowledge to take down the Internet in 30 minutes (but also that they wouldn't do it). Today, one of the original members of L0pht, Peiter Zatko aka "Mudge," works for DARPA. Another, Joe Grand aka "Kingpin," runs a hardware design studio in San Francisco. And even Moss, who wasn't part of L0pht, has served on President Obama's Homeland Security Advisory Council and is today ICANN's Chief Security Officer.
The film digresses into the pressing privacy issues we face today, with insight from Jennifer Granick, who at the time of production was a lawyer with the Electronic Frontier Foundation (EFF), and Lorrie Cranor, a researcher with Carnegie Mellon University's CyLab. They remind us that with each digital transaction we leave digital breadcrumbs everywhere, and that we don't always have a say in how that information might later be used.
One of the really cool moments in the documentary is when penetration tester Gideon Lenkey shows off a mobile version of the Metasploit software running on an iPhone: Lenkey uses it to log into a Windows laptop in an open Internet café. Lenkey also reveals some of the social-engineering tricks he uses to get inside corporate campuses without explicit permission.
Capping the film are interview segments with security expert Bruce Schneier who says “the Internet is the greatest Generation Gap since Rock N Roll,” and that our kids, who grew up with this technology already available to them, will be the best to decide how electronic devices should be used going forward.
Moss agrees: “People can’t control what they don’t understand. How do you evaluate the risk of a computer controlled car? Well, people don’t really know. We’ve never had computer controlled cars before.”
I should disclose that I am one of the handful of supporting computer security experts who appear throughout Code 2600. Although my interview segments were shot at Black Hat DC back in January 2010, they hold up well today. Indeed, all of the interviews Zerechak captured in the three and a half years he worked on the film appear eerily prescient today.
Since premiering at the Cinefest Film Festival in San Jose, California, last March, Code 2600 has enjoyed a limited run exclusively in film festivals around the country. At the Atlanta Film Festival the documentary won a coveted Grand Jury Award. Zerechak is currently working on a major film distribution deal so hopefully Code 2600 will receive the wider audience it deserves. In the meantime, you can see it next Friday night, 8pm, July 27, 2012, at the Rio Hotel in Las Vegas, Nevada. Admission to DefCon 20 is $200, cash only (of course).
This blog also appeared on Forbes.com
In the new motion picture Skyfall, James Bond uses fewer gadgets than in previous films, but a future 007 might not have to rely upon Q at all, instead taking advantage of ordinary gadgets, according to one researcher. On Wednesday at the Amphion Forum in San Francisco researcher Ang Cui demonstrated an attack on common Cisco-branded Voice over IP (VoIP) phones that could easily eavesdrop on private conversations remotely.
Cui, a fifth year grad student from the Columbia University Intrusion Detection Systems Lab and co-founder of Red Balloon Security, has pioneered an academic career attacking common embedded systems, such as routers, printers and now phones. He repeatedly called these devices “general-purpose computers,” forcing his audience to shift paradigms and understand that the devices that now surround us are, for the most part, insecure by design.
Update: Cisco’s statement added at the end of the article, along with clarifications from Salvatore Stolfo.
In research done in 2009-2010, Cui, along with Salvatore Stolfo, also from Columbia University, ran a wide-area scan of the Internet, pinging nearly every IPv4 address and tracking vulnerabilities. What Cui and Stolfo found was that roughly 20 percent (1 in 5) of Internet-connected embedded systems contained trivial vulnerabilities they could later exploit, often something as simple as a lack of authentication.
Compromising commonly found gadgets for espionage apparently dates back a few decades. Cui cited Project Gunman, details of which can be found in a recently declassified NSA document. During the Cold War, the Soviet Union provided the US Embassy in Moscow with IBM Selectric typewriters. It wasn’t until the 1980s that the US discovered that the bottom panel of these typewriters had been hollowed out and filled in with electronics that recorded the exact position of the typeball and thus could relay what messages were being written via a radio transmitter to a vehicle parked outside.
Last year Cui produced what he called Project Gunman v2 where a laser printer firmware update could be compromised to include additional (and possibly malicious) code. Printers, he and Stolfo argued, are the logical successors to the typewriter. Someone could now remotely compromise a printer located within the organization’s firewall, and eavesdrop on documents being printed or stored — without ever setting foot on the premises. The compromised printer could also launch attacks on the internal network.
The demonstration at the Amphion Forum in San Francisco on Wednesday took such an attack further. Cisco Unified Voice over IP (VoIP) phones (also known as Cisco TNP phones), Cui said, appear within the U.S. federal government, and he showed photos of various high-ranking offices attesting to this. He argued that the same logic used in compromising a printer could be applied to compromising the 7900 series of these VoIP phones.
At a bare minimum, both printers and VoIP phones contain a system-on-a-chip (SoC), a Flash ROM, and RAM. In the case of the phone there's an additional element, the Off Hook Switch, which tells the phone when the microphone should be active, among other things. If this feature could be compromised, Cui said, then all conversations, not just the calls the end user chooses to make, could be monitored remotely.
To present the demo, which had never been tried in a public forum before, Cui employed an external circuit board that he said James Bond would have no trouble inserting onto a telephone inside the target organization. Cui suggested he could be a job applicant to get inside or he could simply compromise the lobby phone. Once one phone is compromised, the entire network of phones could be vulnerable. He said later he could also perform the exploit remotely, no physical-world circuit boards necessary.
With the circuit board in place, Cui used an app he had written for his mobile phone to connect to it and export the mic data from the compromised phone sitting on a table next to the speaker's dais, where the off-hook mic now captured his every word. After passing the mic data over the Internet to Google's Speech to Text service, he projected on a screen behind him a transcript of his spoken words, each appearing after a slight delay. He said that he could also bypass Google and simply capture the audio file as an "automatic blackmail device."
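Cui's actual exfiltration-and-transcription tooling has not been published, but its shape is easy to sketch. Everything below (the chunk format, the fake transcriber) is an assumption made for illustration; a real build would feed raw PCM audio to an actual speech-to-text service:

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class AudioChunk:
    seq: int     # sequence number, so chunks can be reordered on arrival
    pcm: bytes   # raw microphone samples exfiltrated from the phone

def transcribe_stream(chunks: Iterable[AudioChunk],
                      transcriber: Callable[[bytes], str]) -> list[str]:
    """Feed exfiltrated audio chunks to a speech-to-text backend in
    sequence order, yielding one transcript line per non-empty result."""
    lines = []
    for chunk in sorted(chunks, key=lambda c: c.seq):
        text = transcriber(chunk.pcm)
        if text:
            lines.append(text)
    return lines

# A canned fake backend standing in for the real speech-to-text service:
canned = {b"\x01": "hello", b"\x02": "world"}
result = transcribe_stream(
    [AudioChunk(2, b"\x02"), AudioChunk(1, b"\x01")],  # arrive out of order
    lambda pcm: canned.get(pcm, ""),
)
print(result)  # → ['hello', 'world']
```

The pluggable `transcriber` is the design point: the same pipeline could post to Google, or skip transcription and archive the raw audio, which is Cui's "blackmail device" variant.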
Cui did not specify the exact vulnerability. He mentioned early on that Cisco had a default password baked into each phone, but the vulnerability he used was a syscall that allowed him to patch the device with arbitrary code. This is what let him turn the Off Hook Switch into what he called a "funtenna."
“Instead of a phone, it’s more of a Walkie-Talkie,” he said, adding that he could use either the handset microphone or the speaker phone microphone to eavesdrop.
Cui said affected models include the Cisco Unified IP Phone 7975G, 7971G-GE, 7970G, 7965G, 7962G, 7961G, 7961G-GE, 7945G, 7942G, 7941G, 7941G-GE, 7931G, 7911G, and 7906. Of these, the 7971G-GE, 7970G, 7961G, 7961G-GE, 7941G, 7941G-GE, and 7906 have already reached "end of life" status.
Cui pointed out that current security solutions don't work with embedded systems. "Signing files doesn't make the files secure," he said. He also said that routers, printers, and phones are general-purpose computers without host-based intrusion detection or antivirus protection built in, so they make attractive targets. Further, they often lack encryption for data in motion or at rest.
Cui did credit Cisco for doing several things right in the design of its phones and for being responsive to his disclosure. Cisco already has a software patch for customers, and it will be generally available in January. He also said this research was carried out as part of the DARPA CRASH program (from the I2O office) and the IARPA Stonesoup program, and that he recently briefed agencies of the US federal government about the potential for a serious attack on all of their Cisco Unified VoIP phones.
Cisco's statement: "The company maintains a very open relationship with the security community and we view this as vital to helping protect our customers' networks. We can confirm that workarounds and a software patch are available to address this vulnerability, and note that successful exploitation requires physical access to the device serial port, or the combination of remote authentication privileges and non-default device settings. Cisco thanks Ang Cui and Salvatore Stolfo for allowing our team to validate the vulnerability and prepare a software patch ahead of the presentation. A formal release note for customers was issued on November 2nd (bug id: CSCuc83860) and we encourage any customers with related questions to contact the Cisco TAC."
A version of this also appeared on Forbes.com
In the hills near Cockeysville, MD, various teams of emergency responders were communicating through a central incident command system after a truck carrying chlorine collided with a train near a telecommunications facility. Although the emergency, staged the first week of April, was simulated, men and women in Level B Hazmat suits were nonetheless struggling to get on top of the unfolding disaster. These were not employees of Cockeysville emergency services, nor of the state of Maryland, nor the federal government. They were employees of Verizon.
"A lot of our facilities are in shared tenant buildings," said Dick Price, Chief Business Continuity Officer at Verizon. "Our major switch sites are located in industrial parks, and a majority of our transcontinental fiber routes run along major railways. All three of these cause us to have situations outside our control, and create situations where we may not be able to get inside our technical facilities based on problems others were having."
While it's tempting to say Verizon's obsession with disaster preparedness stems from 9/11, when one of its buildings at the World Trade Center site was temporarily knocked out of commission, Price offers a better explanation: 9/11 was just the push Verizon needed to enhance its emergency preparations.
Unlike most companies, Price explained, Verizon needs offices with racks of telecommunications equipment in several major and minor cities throughout the U.S. and other countries. Therefore the company must work with landlords and various local officials with the buildings it uses. "If someone has a fire, and it knocks out our equipment, it's a problem."
Price gave one example where a landlord doing asbestos abatement in a building offered to move Verizon to another floor while the work was being done. "We're not a law office," Price said with a laugh, adding that their racks of equipment are not easily moved.
Because of its unique circumstances, Verizon requires each of its business units to have a business continuity plan that is tested and signed off by the senior vice president of each unit. The units not only have to have a strategic response plan for a data breach or insider theft, but also a tactical, in-the-field plan for external circumstances. Each unit has to know how to assist local emergency workers in the event of a natural or man-made disaster.
"The plans are generic," Price explained, adding that the specifics of a disaster can be filled in as needed later.
As an example, Price said that last winter there was a pipeline explosion outside of Chicago. The Verizon team showed up and assisted the local HAZMAT team, keeping it from inadvertently digging up Verizon's communications lines. Verizon also provided help after Hurricanes Katrina and Rita; after Katrina, when hotels were at a premium, the company shipped in mobile housing units for its staff to use instead.
Verizon was able to help with such national disasters in part because it has its own HAZMAT team, the Major Emergency Response Incident Team, or MERIT. It consists of members who are former firefighters and have gone through rigorous training. In the field, they follow the Incident Command System (ICS), the accepted protocol for emergency workers. Price said they are accepted by most local emergency workers because Verizon arrives with its own equipment and knows the jargon; in other words, they can start helping right away.
In the Cockeysville drill, members of the Baltimore Fire Department were on hand to observe the MERIT training. They also toured the Verizon building so they would be more familiar with it should they ever need to enter it in an emergency. Ultimately, said Verizon, the point of these exercises, held around the country, is to build relationships with local emergency services.
After the Japanese earthquake and tsunami, Verizon opted not to send the MERIT team. "Radiological incidents are better handled by the military or the government," Price said. He and his team have participated in Department of Homeland Security disaster drills, such as the mock nuclear explosion drill held in Indianapolis last year. Verizon did, however, send satellite phones and dosimeters to its workers in Japan.
This disaster continuity planning goes well beyond the requirements for the healthcare, insurance, and financial services industries, which are required by law and regulation to have a business continuity plan. There is no such requirement for telecommunications, Price said. Telecommunications is regulated by the F.C.C. at the federal level and by state utilities commissions at the local level. For now it's up to individual telecommunications companies to set their own business continuity requirements.
Originally appeared on Forbes.com
In the first report of its kind, Debix, an identity protection services company, found the identity theft victimization rate among children was 10 percent last year, or fifty-one times the 0.2 percent rate within the adult population.
The report, Child Identity Theft: New Evidence Indicates Identity Thieves are Targeting Children for Unused Social Security Numbers, found that out of a population of 42,232 minors (those under the age of 18) whose identities were scanned by Debix between 2009 and 2010, 4,311 had their Social Security numbers (SSNs) misused by other people. While all of these individuals may already have had reason to suspect, as the result of a data breach, that they or their children were at greater risk of identity fraud, the report's focus on minors is nonetheless both alarming and welcome.
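The report's headline rates can be checked directly from the figures above:

```python
minors_scanned = 42_232   # minors whose identities Debix scanned
ssns_misused = 4_311      # of those, SSNs found in use by someone else

child_rate = ssns_misused / minors_scanned   # about 0.102, i.e. ~10.2%
adult_rate = 0.002                           # the 0.2% adult rate cited

print(round(child_rate * 100, 1))      # → 10.2 (percent)
print(round(child_rate / adult_rate))  # → 51 (times the adult rate)
```

So the "10 percent" and "fifty-one times" figures are consistent with the underlying scan counts.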
The report's author, Richard Power, Distinguished Fellow and Director of Strategic Communications at Carnegie Mellon, agreed this was not a rigorous academic study but defended the report's findings by saying "it won't matter what the numbers are if it happens to you." He added that stolen identities of children might be the hottest ticket in the identity underground today. For example, the report includes first-person accounts of an eight-year-old whose credit history included a foreclosure on a home, a three-year-old in collection for unpaid utility bills, and a five-year-old whose SSN was used for a hunting license in another state. The youngest victim was three months old. The highest losses were attributed to a 16-year-old who found nearly three-quarters of a million dollars in fraud losses pegged to the misuse of her SSN.
Power, who had access to sanitized data provided by Debix, confirmed that a significant portion of the children's stolen SSNs came to light as the result of institutional database breaches. He said these breaches likely included insurance and healthcare companies, where parents and guardians list dependents for coverage, although he found that 78 percent of the identity thefts against children had occurred before the data breach event. Additionally, he said there were a few cases where friends and family took advantage of the pristine credit record of a minor, what's called "friendly fraud."
Debix used last week's release of the report to announce AllClearID, the first free personal identity protection service in the industry. All other identity protection services require some form of subscription unless provided free for a period of time as the result of a data breach. Bo Holland, founder of Debix, has said he doesn't believe consumers should have to pay for identity theft protection, although AllClearID also sells a premium plan, including actionable alerts and $1 million theft insurance, for $9.95 a month. Through the month of April, Debix will let parents scan any US-born child for free to find out if there is any misuse of the child's SSN.
"The problem is," said Power, "we've never dealt with the underlying issues of authentication." He said we still use unencrypted passwords. And we still use SSNs, even though they were never intended to be an authentication method.
When asked why children's identity theft appears to be a far bigger problem than previously reported, Power said adequate research in this area hasn't been done, for various social, business, and political reasons. He said there's no single explanation for why children's SSNs are being targeted now, except that the criminal activity can go undetected for long periods, often until the victim turns 18 years old.
There are millions of SSNs assigned to children but unused, perfect for any criminal wanting to escape detection. Starting with the Enumeration at Birth (EAB) initiative of 1989, parents of newborns in the US can request a Social Security number along with a birth certificate. Further, in 2009, researcher Alessandro Acquisti and others at Carnegie Mellon produced an algorithm that can accurately predict five of the nine digits of an SSN knowing only a person's city and date of birth. Realizing that blocks of unused SSNs are just sitting there, some identity theft experts have called for legislation limiting the use of SSNs issued to minors until the child turns 17 years and 10 months old, a so-called Minors 17-10 Database.
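A rough sketch of why pre-randomization SSNs were guessable: the first three digits (the "area number") were tied to the state of issuance, and the next two (the "group number") were assigned in a known sequence, so a birth state and date sharply narrow the first five digits. The state ranges below are a small subset of the SSA's published allocation table; the "group inferred from birth date" step is a deliberate simplification of Acquisti's actual statistical method:

```python
# Area-number ranges by issuing state (small real subset of the SSA table).
AREA_BY_STATE = {
    "NH": range(1, 4),      # 001-003
    "VT": range(8, 10),     # 008-009
    "PA": range(159, 212),  # 159-211
}

def candidate_prefixes(state, groups_inferred):
    """First five SSN digits consistent with a birth state and a small
    set of group numbers inferred from the date of birth."""
    return [f"{area:03d}-{g:02d}"
            for area in AREA_BY_STATE[state]
            for g in groups_inferred]

# Suppose the birth date narrows the group number to two candidates:
prefixes = candidate_prefixes("VT", [2, 3])
print(prefixes)                 # → ['008-02', '008-03', '009-02', '009-03']
print(len(prefixes) * 10_000)   # remaining serial-number guesses: 40000
```

Instead of a billion equally likely nine-digit numbers, the attacker is left with tens of thousands of candidates, which is exactly what makes a block of unused child SSNs so attractive.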
Power wrote in Friday's Carnegie Mellon CyLab Blog that children's identity theft "should be the subject of serious academic research; and that time and resources should be dedicated to a scientific analysis of this and similar data, to determine what it really means, and if the trends that seem to present themselves hold up under rigorous investigation."
That is indeed the question. It would be nice if Carnegie Mellon or another established institution could confirm the Debix findings, as this might help win federal legislation to support the protection of the Minors 17-10 Database.
Originally appeared on Forbes.com
When you type in Forbes.com (and not a string of hard to remember numbers) you can thank Paul Mockapetris, one of the creators of Domain Name System (DNS). DNS is fundamental to the Internet, instantly translating the common name to the machine address without the user being aware. Now, after 40 years, DNS is about to get security.
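That silent translation is available to any program through the system resolver. A minimal Python example (resolving localhost, so it works even without a network connection):

```python
import socket

# Ask the system resolver, which ultimately speaks DNS for public names,
# to translate a hostname into the numeric address programs connect to.
# "localhost" resolves locally, so this runs without network access.
address = socket.gethostbyname("localhost")
print(address)  # → 127.0.0.1
```

Typing a public name like Forbes.com into the same call would trigger the full DNS machinery Mockapetris designed, walking from the root servers down to the authoritative server for the domain.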
What took so long?
When DNS was first rolled out back in 1986, security wasn't a driving force, according to Mockapetris. "The Wright Brothers didn't have a drink cart or bathroom in their first plane," he said. In other words, Mockapetris and others had to triage the DNS implementation and always knew security would come later; they just didn't necessarily know how much later.
The modern Internet inherited limitations from its precursor, ARPANet, so the original DNS architecture had to be designed around such things as the size of the packets sent across the network. “What we wanted to do was make sure that servers could function even if we were using the minimum size packet,” he said.
To handle this, he said, thirteen root servers were established, each containing the master list of address translations (more or less a master phone book). Hundreds of additional servers, each with copies of the master list, further distributed the load as more and more users needed address translations. The thirteen are like brands, he said, with one run by ICANN, another run by Verisign, and so forth.
In defense of this tree-like architecture and its implementation, Mockapetris offered a surfboard analogy: they wanted to produce a surfboard that could handle not only the waves of the Internet of 1983 but also the 50-foot waves of today. Back then, though, they first needed to prove that the directory structure was flexible yet strong enough, because critics said it was much too complicated. "[DNS] was barely accepted by the community at the time," he said.
In the 1990s, concerns about DNS security were the subject of internal policy debate. More recently, attackers have begun to poison DNS caches, changing registrations and sometimes enabling denial-of-service attacks. "If I were running my own company," he said, "I'd get my own copy of the root server," which he said holds a relatively small amount of data.
He said attackers might someday take down a root server, but what they can do now is congest the pathway between you and it. So why not have your own? “One idea going forward,” he said, “is that the root server might go away,” and everyone would have their own local copy.
Today Mockapetris is chairman and chief scientist at Nominum, a company that provides DNS security features to ISPs and Enterprises, and looks forward to Domain Name System Security Extension (DNSSEC), which uses public key infrastructure to provide an authentication trail. “[DNSSEC] is the next step in this triage and it will enable some important things and solve a few important problems,” he said.
On July 15, 2010 the first changes toward implementing DNSSEC were made at the root level and are now trickling down through the thirteen root servers. For example, VeriSign plans to have all of its .com and .net domains authenticated by next Thursday, March 31, 2011. It's important to note that DNSSEC does not encrypt data, nor does it directly stop denial-of-service attacks. It does, however, create a layer of trust, as addresses resolved by a caching server can be instantly compared with the original data on the master server.
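The trust layer DNSSEC adds can be illustrated with a toy signature scheme. Real DNSSEC publishes DNSKEY/RRSIG records and uses full-strength RSA or ECDSA keys; the tiny key below exists only to show the idea of verifying a cached answer against what the zone operator signed:

```python
import hashlib

# Toy RSA key (absurdly small primes, illustration only; never do this
# for real cryptography).
p, q = 10007, 10009
n = p * q
phi = (p - 1) * (q - 1)
e = 65537
d = pow(e, -1, phi)   # private exponent (modular inverse, Python 3.8+)

def digest(record: str) -> int:
    return int(hashlib.sha256(record.encode()).hexdigest(), 16) % n

def sign(record: str) -> int:
    """Done once by the zone operator, analogous to creating an RRSIG."""
    return pow(digest(record), d, n)

def verify(record: str, sig: int) -> bool:
    """Done by any resolver holding the zone's public key (e, n)."""
    return pow(sig, e, n) == digest(record)

record = "www.example.com. A 192.0.2.1"
sig = sign(record)
print(verify(record, sig))                            # → True
print(verify("www.example.com. A 203.0.113.9", sig))  # → False
```

A poisoned cache entry, like the tampered address in the last line, fails verification even though the attacker never had to break into the resolver, which is exactly the property DNSSEC adds on top of plain DNS.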
Looking ahead, Mockapetris outlined a few DNSSEC use cases. Companies might, for example, publish the serial numbers of RFID tags if they knew the data would be secure. "If I have an authenticated DNS, I can think about putting that database out there," he said.
For example, a trucking company might allow firefighters to know the contents of a burning truck involved in an accident through RFID look up. Additionally, patients receiving medication in a hospital can be assured the nurse has matched the RFID of the drug with their patient record, potentially eliminating one source of medication mix-up.
This blog originally appeared on Forbes.com
When a computer crashes, our instinct is to reboot, not to question the root cause. But perhaps we should try to understand our failures before trying to forget them. Paul Kocher, president and chief scientist of Cryptography Research, Inc. in San Francisco, thinks the computer security industry's understanding of failure is still in its infancy, and that security practitioners today should learn from industries that have greatly improved their risk profiles and consumers' trust over the years. The aviation industry, for example.
In the 1940s "there were about ten deaths per one hundred million passenger miles," he said. That works out to one death per ten million miles flown. Today, when air travel is much more common, a frequent traveler can accumulate a million or two air miles over a lifetime. At the 1940s rate, two million miles would mean roughly a 1 in 5 chance of dying in a plane crash. With that track record, the aviation industry might not have survived, or be as robust as it is today.
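The arithmetic is worth working through, taking two million career air miles as a round figure for a heavy flier (that lifetime total is an assumption, not a statistic from the article):

```python
deaths_per_passenger_mile = 10 / 100_000_000   # the 1940s rate Kocher cites
career_miles = 2_000_000                       # assumed lifetime air miles

# Expected deaths per passenger over a flying lifetime at the 1940s rate.
expected_deaths = deaths_per_passenger_mile * career_miles
print(expected_deaths)           # ≈ 0.2, i.e. about a 1 in 5 chance
print(round(1 / expected_deaths))  # → 5
```

At a million miles the figure halves to about 1 in 10; either way, the risk would be intolerable at today's traffic volumes.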
Yet we tolerate similar failures and crashes within the computer industry every day.
Kocher said there's been a thousand-fold improvement in aviation safety over the years because every time a plane crashes, the industry doesn't say "Oops, that piece of metal broke." Or "Too bad." Or "the pilot made that dumb mistake because they didn't deal with the engine failure properly." Instead there's a formal process that leads to exponential improvement in aviation safety.
Every aviation accident gets investigated, and often there is not one but a number of root causes behind it. "It's essentially impossible that one error can bring down an airplane today," he said, since three, four, or five failures usually compound on each other. With the mandatory use of black boxes, extensive field investigations, and expensive reconstructions, each aviation failure becomes less and less likely to recur.
"In computer security we're going the other direction," Kocher said, because the industry doesn't take a professional, analytic view of failure. Some vendors will spend many months looking for problems that don't exist. Others will fix only the reported bugs and do no more.
"In aviation industry there's not an attempt to put gloss around aviation safety to try and convince consumers there's no possibility of an airplane crash if you carry the magic wand in your hand," he said. Instead there are individuals and companies that try to gather as much information as possible. They perform a root-cause analysis and try to learn as much as they can from each failure.
On the other hand, Kocher said, within computer security if you go to ten practitioners and ask what you should do to solve your particular data security problem, you'll get ten different answers. One or two of those solutions may work. Eight of the ten may not.
He compared computer security to medicine in the 1820s, "when you had snake oil being sold along with some things that worked well but we may not know why they work." Even when solutions do work, we often don't know enough about them to explain why. After more than fifty years, we don't yet understand the root causes of computer failure.
Kocher cites Moore's Law, which states that the number of transistors placed on a chip will double every two years. Moore's Law allows for the inexpensive installation of many additional layers of protection, so that if one piece fails the others will still ensure the overall security properties are met. Eventually, if you build up enough barriers, "it works but it is not very elegant," he said. "It's like putting thirty layers of bunker around your house, a concrete one, a wooden one, a steel one, etc., and then trying to make them interlock in various ways to keep your teenage daughter from leaving the house at night."
Kocher said it's important to understand the underlying motivations as well. Today the computer attacker has more incentive to learn about failures than the solution vendors do. The good guys collect their salaries whether or not a given solution works; the bad guys only get paid if they succeed.
This originally appeared on Forbes.com
The mobile phone provides additional customer security for financial transactions. By voice or text, banks may question account holders in real time about large transfers of funds, potentially stopping fraud in progress. While attending a recent public-private summit for the financial services industry, however, I heard of several ways that criminals are using the financial services' own call centers to circumvent these security controls.
The criminals start by acquiring your account information, whether by placing keystroke loggers on your desktop, by deploying sniffer programs on the network, or by using traditional phishing campaigns that entice you to volunteer personal data. The criminals then masquerade as the account holder in a call to the customer service representative (CSR) at the targeted financial institution.
In the past, fraud at the ATM was relatively out of reach; the criminal might get your account number but not the associated PIN. One call center scam involves calling the CSR to change the PIN on an ATM card. By providing the call center with a name, an address, even the nine digits of a Social Security number and the targeted account number, the criminal is able to reset a four-to-six-digit ATM or credit card PIN. After burning the stolen account data onto a blank magnetic stripe card, the criminal can then use the new PIN at any ATM.
Another way cybercriminals are using the call center is simply to change the contact phone number on an existing account. Most of us may not be accustomed to having banks contact us over the phone, but when an atypically large transaction is pending, most institutions will call or text to confirm. Now the criminals are changing the contact number on record to their own. Then, when the bank calls to confirm, the criminals approve the transfer because the financial institution has called them and not you. But financial institutions are aware of this scam and have now started calling both the new and the old phone numbers for confirmation.
The criminals, of course, are one step ahead.
In one case, documented by Kim Zetter over at Wired, a doctor's home, office, and cell numbers were jammed with repeated calls, keeping his bank from reaching him. Some were solicitations for sex websites, others pure silence. When customers complain to their telephone carrier about such calls, some telephone companies now warn them that there might be a financial crime associated with the calls.
All of these attacks expose weaknesses in the call center's authentication of account holders. Financial institution call center CSRs often rely on the Automatic Number Identification (ANI), the phone number that arrives with each incoming call. ANI is unrelated to CallerID; it is based on billing data, and thus is captured by a CSR system even if the caller has blocked CallerID. Cybercriminals can and do manipulate ANI, making their call appear to come from anywhere, including the original registered contact phone number for a stolen account.
Challenge-response questions aren't the answer either. Cybercriminals can search for, and often find, the answers to many common questions online. For example, the password to Sarah Palin's Yahoo e-mail account was reset by someone guessing that she met her husband in high school.
Instead, institutions should use more than one type of call center authentication: ANI plus challenge-response questions derived from past financial interactions with the customer ("Where was your last ATM transaction?"), or better yet, a mutually agreed-upon password. Additionally, institutions should automatically enroll account holders in a package of security-based e-mail, text, and voice alerts covering, among other things, changes to the physical address, the addition of a new person to an existing account, changes made to the contact phone number, and changes made to the PIN on an account.
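The layered approach described above can be sketched in a few lines of Python. This is a minimal illustration, not any institution's actual logic; the account fields, factor names, and two-of-three threshold are all assumptions made for the example.

```python
# Hypothetical sketch of layered call-center authentication: ANI match,
# a challenge derived from recent transactions, and a pre-arranged password.
# All field names and the two-factor threshold are illustrative assumptions.

def authenticate_caller(account, incoming_ani, challenge_answer, spoken_password):
    """Require at least two independent factors before honoring a request."""
    factors_passed = 0

    # Factor 1: does the incoming ANI match the number on record?
    # Weak on its own (ANI can be spoofed), so it never suffices alone.
    if incoming_ani == account["phone_on_record"]:
        factors_passed += 1

    # Factor 2: a challenge drawn from recent financial activity,
    # e.g. "Where was your last ATM transaction?"
    if challenge_answer.strip().lower() == account["last_atm_location"].lower():
        factors_passed += 1

    # Factor 3: a mutually agreed-upon password.
    if spoken_password == account["call_center_password"]:
        factors_passed += 1

    return factors_passed >= 2

account = {
    "phone_on_record": "+1-555-0100",
    "last_atm_location": "Main St branch",
    "call_center_password": "bluebird",
}

# A spoofed ANI by itself is not enough:
print(authenticate_caller(account, "+1-555-0100", "wrong", "wrong"))  # False
# ANI plus the transaction-derived challenge passes:
print(authenticate_caller(account, "+1-555-0100", "main st branch", "wrong"))  # True
```

The point of the two-of-three rule is that a criminal who has spoofed ANI still needs knowledge that only the genuine account holder is likely to have.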
In theory, the average account holder should never see these alerts. But when they do, hopefully they'll realize they need to react and stop the fraud in real time.
Originally published on Forbes.com
When Benjamin Jun received a winter catalog in the mail from Nike with a personal URL on the cover, he didn't realize the wealth of information that would soon be available to him online. Jun, vice president of technology at Cryptography Research, said that once online he was able to access a database showing what people he knew had purchased at various Nike stores. The site (and the entire winter campaign) is now down, but social media mashups such as this raise serious questions about companies that combine various databases, often without our direct consent.
This week Facebook has come under scrutiny for its new social media network. While you are logged into Facebook, a simultaneous visit to one of Facebook's partner sites will reveal what your Facebook friends think of content on that site. The application also allows you to interact with your Facebook friends on the partner site, extending your social media experience.
However, the application also allows third parties to collect data about you and your friends, making public (in some cases) data that you may have marked as "friends only" within the privacy settings on the Facebook side. More ominously, Facebook is allowing its partner sites to store this demographic and marketing information indefinitely.
On Monday, four senators (Charles Schumer of New York, Michael Bennet of Colorado, Mark Begich of Alaska, and Al Franken of Minnesota) wrote to Facebook CEO Mark Zuckerberg with several privacy concerns, including asking why it is so difficult for customers to opt out of the new networking platform. Indeed, there are multiple settings within Facebook that must be tweaked in order to restrict private information.
The true dangers lie beneath the surface, beyond the mere marketing information of likes and dislikes.
In his talk last month at the 2010 RSA Conference, Jun spoke about the underlying assumptions made by site designers (not just at Nike and Facebook or their partners) who are incorporating mashup strategies, assumptions that might not be true. For example, the process of authorizing credentials on a social networking site is very different from the process of obtaining credentials on an e-commerce or online banking site. Site developers might be tempted to accept the APIs of a popular social media site as a way to increase revenue. Jun says application designers should instead avoid, or at least carefully vet, the information being passed to them from another source.
To prevent unintended access, Jun advocates the creation of a "session manager," one more checkpoint in the security chain. While it's always controversial to propose slowing down the consumer experience, the session manager would receive credentials from a third-party site, vet the data, then prompt for additional authentication if necessary.
Simply passing credentials from one site to another without reevaluating them is dangerous, said Jun. He cites, in particular, the three R's of application development: redirects, renegotiation, and reconnections. It is within these that gaps of trust among different systems can allow bad actors access to sensitive data without proper authentication. In the case of the Nike campaign, Jun says, the only authentication was the unique URL on the cover of the catalog; anyone reading the mailing could have gone online as him.
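The session-manager idea can be sketched roughly as follows. This is an illustrative design sketch only; the action names, credential fields, and policy are invented for the example and are not Jun's (or anyone's) actual implementation.

```python
# Illustrative sketch of a "session manager": credentials arriving from a
# third-party site are vetted rather than trusted outright, and sensitive
# actions trigger step-up authentication. All names here are hypothetical.

SENSITIVE_ACTIONS = {"view_purchases", "change_address", "make_payment"}

def vet_third_party_session(credential, requested_action):
    """Decide whether a third-party credential suffices for the request."""
    # 1. Basic sanity checks on the incoming credential.
    if not credential.get("user_id") or credential.get("expired", True):
        return "reject"

    # 2. A social-login credential proves far less than a first-party login,
    #    so it only grants low-risk access.
    if requested_action in SENSITIVE_ACTIONS:
        if credential.get("source") != "first_party":
            # Prompt the user to re-authenticate directly with this site.
            return "step_up_auth"

    return "allow"

social_cred = {"user_id": "alice", "expired": False, "source": "social_network"}
print(vet_third_party_session(social_cred, "view_feed"))       # allow
print(vet_third_party_session(social_cred, "view_purchases"))  # step_up_auth
```

The key design choice is that the origin of a credential, not just its validity, determines what it may authorize; a mailed URL or a social login never unlocks purchase history on its own.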
I, for one, do not need to know what news stories my friends are reading right now; let them surprise me later in a real (not virtual) conversation. Nor do I need to see what my friends are buying from an e-commerce site. Really, I'm probably the last person to go online, learn that someone I know bought a pair of blue running shorts, size medium, and say "Hey, order me a pair also!" Just because the crowd is doing something doesn't mean I'm going to do it.
But for many, social networking is a way of life, a connection to others. For them, letâ€™s get the security right. With online data leakage occurring in new and surprising ways these days, why take the chance of sharing databases without providing additional back-end controls?
Originally published on Forbes.com
Beyond date of birth, what other personal information are we giving away on social network sites? In a talk a few weeks ago at the 2010 RSA Conference, security researcher Nitesh Dhanjani explored some non-traditional ways social networking could be used to profile individuals. He says just by studying your social networking presence one can identify, for example, pending business deals.
Dhanjani, who says his exploration is just a hobby, created a LinkedIn account for a friend who didn't yet have one (we'll call him "Jack"), then invited a mutual friend to join Jack's LinkedIn network. Within a short time, Jack had acquired over 80 connections. What's surprising here, says Dhanjani, isn't that people linked to this fraudulent LinkedIn profile, but how much information he, as an impostor, was able to glean about Jack's sphere of influence and business.
For example, a competitor cybersquatting as Jack could now see Jack's clients. And if Jack's company were about to be acquired (and that information was not yet public), an outsider might notice a recent influx of new connections from several people at a rival organization. The lesson here is to establish a presence on the major social networks, if only to stake a claim to your name and reputation.
Even legitimate social networks can be hacked: someone could friend you just to get access to someone else you know. A law enforcement officer could be seeking information on a person of interest who happens to be part of your social network. According to the Electronic Frontier Foundation, social networks are being used by federal investigators, and last week the privacy organization released a 38-page PDF training course (obtained through the Freedom of Information Act) that the EFF said was used for conducting investigations via social networks. While federal agents can't legally pretend to be someone else, they can request to be your friend and thus see all your posts, as well as those of others in your network. The EFF has been studying the privacy issues associated with this new form of surveillance. Often we accept people into our social networks by extension of trust, i.e. a friend of a friend, so a good rule of thumb might be to question how well you really know a person before accepting a new friend request.
But one doesn't have to join a social network to map someone's social network.
In his RSA presentation, Dhanjani also demonstrated how outsiders can use publicly available social network information to define spheres of influence around a targeted individual. Popular social networks display the top eight friends for a person as a means of identifying exactly which John Smith you're currently looking at. By comparing the top eight friends on MySpace with the top eight friends on Facebook, Dhanjani says he can map who the critical contacts are for the targeted individual. Going one step further, by looking at the friends of those friends, one can map who has the most influence with the targeted individual, their "posse" if you will, and do so without joining the network. A hacker using social engineering could then contact the targeted individual and say, "Jane said I should contact you about Alice."
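At its core, the cross-network comparison Dhanjani describes is a set intersection. The sketch below uses invented names to show the idea; it is not his tool, just an illustration of why two public top-friends lists leak more together than either does alone.

```python
# Minimal sketch of cross-network profiling: intersect the publicly visible
# "top friends" lists from two networks to find the contacts most likely to
# be central to a target. All names here are invented sample data.

myspace_top8 = {"Jane", "Alice", "Bob", "Carol", "Dan", "Erin", "Frank", "Grace"}
facebook_top8 = {"Jane", "Alice", "Heidi", "Ivan", "Bob", "Judy", "Mallory", "Niaj"}

# Names appearing in both lists are strong candidates for the target's
# inner circle ("posse"), obtained without joining either network.
inner_circle = myspace_top8 & facebook_top8
print(sorted(inner_circle))  # ['Alice', 'Bob', 'Jane']
```

Repeating the intersection over the friends of those friends is what lets an outsider rank who has the most influence with the target.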
Some may see all this as nothing new; Kevin Mitnick pioneered social engineering years ago. But now the means to profile someone are much more convenient. Be careful who you know and what you post online. You never know who might be listening.
Originally published on Forbes.com
On Tuesday ThreatMetrix unveiled its new cloud-based transactional fraud network. Using its global database of device fingerprints (unique details about the PC, mobile phone, or other Internet-connecting device), the company says it can detect fraudulent transactions without the need for acquiring personally identifiable information. By correlating incoming TCP/IP information with its database, for example, the company was recently able to identify and stop one malware-infected computer from making an online transaction.
ThreatMetrix, a Los Altos, California-based company, has been working on its fraud network for four or five years, says Alisdair Faulkner, chief product officer at the company. What's different from other transaction-based fraud networks is that ThreatMetrix uses device fingerprinting rather than transaction details for its fraud detection, providing a new set of tools for organizations to verify new accounts, authorize payments and transactions, and authorize user logins. Faulkner describes the new network as "fraud middleware" in that it is designed to complement and integrate with existing fraud solutions.
It is a very different solution from the approach taken by other transactional fraud networks such as ID Analytics, a San Diego, California-based company that uses data mining of consumer purchases to address identity fraud. By collecting transaction data, ID Analytics says it can profile a customer's typical purchasing behavior and flag an abnormal transaction as possibly fraudulent. Unlike the credit bureaus, which look at static elements of a person's profile (SSNs or open accounts), transactional fraud networks look at live transaction data instead.
What ThreatMetrix brings to the table is a proprietary device fingerprinting methodology that probes beyond mere cookies and browser data to identify the machine being used for online access.
Clearly there is a need for such alternative analysis. Cybercriminals have shown increasing technical sophistication year after year. Masking one's hardware identity seems mere child's play today, unless the defender has sophisticated tools to analyze the output from a compromised machine.
By cataloging devices internationally, ThreatMetrix says it can see through a typical TCP/IP proxy and learn that a machine claiming to be a Windows XP machine located within the United States is in reality a Linux machine located in Vietnam. This could be a machine set up to emulate a legitimate user, or it could indicate a possible man-in-the-middle attack, where a third party is eavesdropping on a user's online session.
ThreatMetrix has also seen one device log into multiple financial services accounts within seconds of each other, as well as numerous devices attempting to log into the same online account. This could indicate the use of a botnet, a rogue network of compromised PCs.
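The two signals just described, a claimed platform that contradicts the passively fingerprinted one and a single device hitting many accounts in a short window, can be sketched as simple heuristics. This is not ThreatMetrix's methodology; the thresholds, field names, and classes below are assumptions invented for illustration.

```python
# Hedged sketch of two device-reputation heuristics: (1) a claimed browser
# platform that contradicts the fingerprinted OS, and (2) one device logging
# into many accounts within a short window. All thresholds are invented.

from collections import defaultdict

def os_mismatch(claimed_platform, fingerprinted_os):
    """Flag e.g. a 'Windows' User-Agent arriving from a Linux TCP/IP stack."""
    return claimed_platform.lower() not in fingerprinted_os.lower()

class VelocityMonitor:
    """Flag a single device fingerprint logging into many accounts quickly."""
    def __init__(self, max_accounts=3, window_seconds=60):
        self.max_accounts = max_accounts
        self.window = window_seconds
        self.events = defaultdict(list)  # device_id -> [(timestamp, account)]

    def record_login(self, device_id, account, timestamp):
        events = self.events[device_id]
        events.append((timestamp, account))
        # Count distinct accounts this device touched inside the window.
        recent = {a for t, a in events if timestamp - t <= self.window}
        return len(recent) > self.max_accounts  # True means "suspicious"

print(os_mismatch("Windows", "Linux 2.6"))  # True: likely a proxy or emulation

mon = VelocityMonitor()
flags = [mon.record_login("device-42", f"acct-{i}", i) for i in range(5)]
print(flags)  # [False, False, False, True, True]
```

Real systems combine many such signals with global device history; the point of the sketch is only that device identity, not transaction content, is what gets scored.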
Despite the new avenues for fraud taken by cybercriminals today, it's nice to see the security industry thinking outside the box and offering innovative solutions.
Originally published on Forbes.com