Gathering information on a target is one of the most important skills of an ethical hacker. The more information you have on a target, the easier it will be to compromise. This blog post will look at some of the techniques ethical hackers use for reconnaissance.
For many, recon is more of an overall, overarching term for gathering information on targets, whereas footprinting is more of an effort to map out, at a high level, what the landscape looks like. They are interchangeable terms in CEH parlance, but if you just remember that footprinting is part of reconnaissance, you’ll be fine.
Although some of this data may be a little tricky to obtain, most of it is relatively easy to get and is right there in front of you, if you just open your virtual eyes.
Vulnerability research is a vital step you need to learn and master. After all, how can you get ready to attack systems and networks if you don’t know what vulnerabilities are already defined? Additionally, I just believe this is the perfect time to talk about the subject.
Most of your vulnerability research will come down to a lot of reading, and most of that reading will come from websites devoted to informing the security crowd what’s out there. What you’ll be doing in your ongoing research is keeping track of the latest exploit news and of the status of any zero-day outbreaks in the field. A zero-day threat is an attack or exploit on a vulnerability that the vendor, developer, system owner, and security community didn’t even know existed. When these arise, the developers of the operating system (OS), application, or system have had no time (zero days) to work on a fix, so even though we all know there is a security flaw, there’s not a whole lot we can do about it yet.
In addition to discovering exploits (zero-day and otherwise) your systems may be vulnerable to, you should also do all this research to figure out what recommendations are being made to deal with them. While a zero-day exploit may become common knowledge today without any known patch to address it, you can usually glean some advice from the vendor on ways to minimize the likelihood of successful exploitation. For example, Adobe has had what seems like millions of zero-day news releases over the past few years. Most times, a user has to actually open an infected PDF file for the zero-day vulnerability to mean anything to anyone. A recommended course of action from Adobe usually includes things like, “Don’t open PDF files from untrusted sources.”
And remember, this research requires you to act, not sit around and wait for it to come to you. Sure, keep up with the news and read what’s going on, but just remember, by the time it gets to the front page of USA Today or FoxNews.com, it’s probably already been out in the wild for a long, long time. Smart ethical hackers and security personnel will keep themselves abreast of vulnerabilities in the world, and making use of just a few websites can make all the difference in having a leg up on the competition. Here are a few of the sites to keep in your favorites list:

- National Vulnerability Database (nvd.nist.gov)
- Common Vulnerabilities and Exposures (cve.mitre.org)
- Exploit Database (exploit-db.com)
- SecurityFocus (securityfocus.com)
Passive footprinting as defined by EC-Council has nothing to do with a lack of effort and even less to do with the manner in which you go about it (using a computer network or not). In fact, in many ways it takes a lot more effort to be an effective passive footprinter than an active one. Passive footprinting is all about the publicly accessible information you’re gathering and not so much about how you’re going about getting it. Methods include, but are not limited to, gathering of competitive intelligence, using search engines, perusing social media sites, participating in the ever-popular dumpster dive, gaining network ranges, and raiding DNS for information. As you can see, some of these methods can definitely ring bells for anyone paying attention and don’t seem very passive to commonsense-minded people anywhere, much less in our profession.
Passive information gathering definitely contains the pursuit and acquisition of competitive intelligence, and since it’s a direct objective within CEH and you’ll definitely see it on the exam, we’re going to spend a little time defining it here. Competitive intelligence refers to the information gathered by a business entity about its competitor’s customers, products, and marketing. Most of this information is readily available and can be acquired through different means. Not only is it legal for companies to pull and analyze this information, it’s expected behavior. You’re simply not doing your job in the business world if you’re not keeping up with what the competition is doing. Simultaneously, that same information is valuable to you as an ethical hacker, and there are more than a few methods to gain competitive intelligence.
The company’s own website is a great place to start. Think about it: What do people want on their company’s website? They want to provide as much information as possible to show potential customers what they have and what they can offer. Sometimes, though, this information becomes information overload. Just some of the open source information you can gather from almost any company on its site includes company history, directory listings, current and future plans, and technical information.
Directory listings become useful in social engineering, and you’d probably be surprised how much technical information businesses will keep on their sites. Designed to put customers at ease, sometimes sites inadvertently give hackers a leg up by providing details on the technical capabilities and makeup of the network.
Another absolute gold mine of information on a potential target is job boards. Go to CareerBuilder.com, Monster.com, Dice.com, or any of the multitude of others, and you can find almost everything you’d want to know about the company’s technical infrastructure. For example, a job listing that states “Candidate must be well versed in Windows 2008 R2, Microsoft SQL, and Veritas Backup services” isn’t representative of a network infrastructure made up of Linux servers. The technical job listings flat-out tell you what’s on the company’s network—and oftentimes what versions. Combine that with your already astute knowledge of vulnerabilities and attack vectors, and you’re well on your way to a successful pen test!
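As a toy illustration of how mechanical this kind of inference can be, here’s a short Python sketch that scans a job posting for technology keywords. The keyword list and the posting text are invented for the example:

```python
# Hypothetical keywords an attacker might scan job postings for.
TECH_KEYWORDS = ["Windows 2008 R2", "Microsoft SQL", "Veritas",
                 "Cisco", "Linux", "Oracle"]

# Invented posting text, echoing the example in the paragraph above.
posting = ("Candidate must be well versed in Windows 2008 R2, "
           "Microsoft SQL, and Veritas Backup services.")

# Case-insensitive match of each keyword against the posting.
found = [kw for kw in TECH_KEYWORDS if kw.lower() in posting.lower()]
print(found)
```

Three matches fall out of a single paragraph of ad copy, which is exactly the point: the company volunteered its stack.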
When it comes to active footprinting, per EC-Council, we’re really talking about social engineering and human interaction. In short, while passive measures take advantage of publicly available information that won’t (usually) ring any alarm bells, active footprinting involves exposing your information gathering to discovery. For example, we can scrub through DNS usually without anyone noticing a thing, but if you were to walk up to an employee and start asking them questions about the organization’s infrastructure, somebody is going to notice.
Social engineering has all sorts of definitions, but it basically comes down to convincing people to reveal sensitive information, sometimes without even realizing they’re doing it. There are millions of methods for doing this, and it can sometimes get really confusing. From the standpoint of active footprinting, the social engineering methods you should be concerned about involve human interaction. If you’re calling an employee or meeting an employee face to face for a conversation, you’re practicing active footprinting.
Navigating the Internet isn’t reliant on memorizing numeric addresses. The road signs we have in place to get to our favorite haunts are all part of the Domain Name System (DNS), and they make navigation easy. DNS, as you’re no doubt already aware, provides a name-to-IP-address (and vice versa) mapping service, allowing us to type in a name for a resource as opposed to its address. This also provides a wealth of footprinting information for the ethical hacker—so long as you know how to use it.
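That name-to-IP mapping is easy to exercise from code as well as from the command line. As a quick illustration, here’s a minimal Python sketch using the standard library’s resolver (the same machinery your browser relies on):

```python
import socket

# Forward lookup: hostname -> IPv4 address, via the system resolver.
ip = socket.gethostbyname("localhost")
print(ip)

# Reverse lookup: IP address -> hostname. This depends on a PTR record
# (or a hosts-file entry) existing, so guard against failure.
try:
    name, _aliases, _addrs = socket.gethostbyaddr("127.0.0.1")
    print(name)
except OSError:
    print("no reverse mapping for this address")
```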
DNS is made up of servers all over the world. Each server holds and manages the records for its own little corner of the world, known in the DNS world as a namespace. Each of these records gives directions to or for a specific type of resource. Some records provide IP addresses for individual systems within your network, whereas others provide addresses for your e-mail servers. Some provide pointers to other DNS servers, which are designed to help people find what they’re looking for.
The only downside to this system is that the record types held within your DNS system can tell a hacker all she needs to know about your network layout. For example, do you think it might be important for an attacker to know which server in the network holds and manages all the DNS records? What about where the e-mail servers are? Heck, for that matter, wouldn’t it be beneficial to know where all the public-facing websites actually reside? All this can be determined by examining the DNS record types.
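For reference, here are the common record types in question, summarized as a small Python lookup table (the one-line descriptions are mine):

```python
# Common DNS record types and what each reveals about a network.
DNS_RECORD_TYPES = {
    "A":     "maps a hostname to an IPv4 address",
    "AAAA":  "maps a hostname to an IPv6 address",
    "NS":    "identifies the authoritative name servers for the zone",
    "MX":    "identifies the mail servers for the domain",
    "CNAME": "an alias pointing one name at another",
    "SOA":   "start of authority: primary server and zone settings",
    "PTR":   "reverse mapping of an IP address back to a name",
    "SRV":   "locates specific services (e.g., LDAP, SIP) in the domain",
}

for rtype, meaning in DNS_RECORD_TYPES.items():
    print(f"{rtype:<5} {meaning}")
```

Notice how the table doubles as a target list: NS records point at the servers holding the zone, MX records at the mail infrastructure, and SRV records at specific internal services.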
Originally a Unix utility, whois has become ubiquitous in operating systems everywhere, and any number of websites have been set up to run the query for you. It queries the domain registries and returns all sorts of information, including domain ownership, addresses, locations, and phone numbers.
To try it for yourself, use your favorite search engine and look up whois. You’ll get millions of hits on everything from the use of the command line in Unix to websites performing the task for you.
Another useful tool in the DNS footprinting tool set is an old standby—a command-line tool people have used since the dawn of networking: nslookup. This is a command that’s part of virtually every operating system in the world, and it provides a means to query DNS servers for information. The syntax for the tool is fairly simple: nslookup [-options] [hostname] [server].
The command can be run as a single instance, providing information based on the options you choose, or you can run it in interactive mode, where the command runs as a tool, awaiting input from you.
For example, on a Microsoft Windows machine, if you simply type nslookup at the prompt, you’ll see a display showing your default DNS server and its associated IP address. From there, nslookup sits patiently, waiting for you to ask whatever you want (as an aside, this is known as interactive mode). Typing a question mark shows all the options and switches you have available.
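If you’d rather script lookups than type them interactively, nslookup’s -type= option selects the record type in a single non-interactive call. Here’s a small Python sketch that builds such a command line; the helper name is mine, and in a real script you would hand the list to subprocess.run:

```python
# Build a non-interactive nslookup command line; -type= selects the
# record type (A, MX, NS, and so on). The helper name is invented.
def nslookup_cmd(host, record_type="A"):
    return ["nslookup", f"-type={record_type}", host]

print(nslookup_cmd("example.com", "MX"))
# In a real script: subprocess.run(nslookup_cmd("example.com", "MX"))
```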
Another option for viewing this information is the dig command-line utility. Native to Unix systems but available as a download for Windows systems (along with BIND 9), dig is used to test a DNS query and report the results. The basic syntax for the command looks like dig @server name type, where server is the name or IP of the DNS name server, name is the name of the resource you’re looking for, and type is the type of record you want to pull.
There are dozens of switches you can add to the syntax to pull more explicit information.
Another tool available for network mapping is traceroute (tracert on Windows systems), a command-line tool that tracks a packet across the Internet and reports the route path and transit times. It accomplishes this by sending probes (ICMP ECHO packets, in the case of Windows tracert) with steadily increasing TTL values: each router that handles a packet decrements the TTL, and when the TTL hits zero, the router discards the packet and returns an ICMP “time exceeded” message, revealing its name and IP address. By incrementing the TTL by one on each successive probe, traceroute draws a response out of each “hop” (router) from the source to the destination. Using this, an ethical hacker can build a picture of the network. For example, consider a traceroute command output from my laptop here in Satellite Beach, Florida, to a local surf shop just down the road (names and IPs changed to protect the innocent).
A veritable cornucopia of information is displayed here. Notice, though, any hop that shows timeouts instead of the information you’re used to seeing. This usually indicates a firewall that does not respond to ICMP requests—useful information in its own right.
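To make the TTL mechanics concrete, here’s a hypothetical Python simulation. It sends no real packets (that requires raw sockets and elevated privileges); the router path is invented purely to show how incrementing the TTL reveals the route one hop at a time:

```python
# Invented router path; in real traceroute these answers arrive as
# ICMP "time exceeded" replies from each router along the way.
FAKE_PATH = ["192.168.1.1", "10.20.0.1", "172.16.5.2", "203.0.113.9"]

def traceroute_sim(path):
    hops = []
    for ttl in range(1, len(path) + 1):
        # A probe sent with this TTL expires at router number `ttl`,
        # which identifies itself by replying.
        router = path[ttl - 1]
        hops.append((ttl, router))
        print(f"{ttl:2}  {router}")
    return hops

hops = traceroute_sim(FAKE_PATH)
```

A hop that stayed silent in the printout would correspond to the timeout behavior described above: the probe expired there, but the device declined to answer.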
This useful tactic in footprinting a target was popularized mainly in late 2004 by a guy named Johnny Long. Mr. Long was part of an IT security team at his job, and while performing pen tests and ethical hacking, he started paying attention to how the search strings worked in Google. See, the search engine has always had additional operators that were designed to allow you to fine-tune your search string. What Mr. Long did was simply apply that logic for a more nefarious purpose.
Suppose, for example, instead of just looking for a web page on boat repair or searching for an image of a cartoon cat, you decided to tell the search engine, “Hey, do you think you can look for any systems that are using Remote Desktop Web Connection?” Or how about, “Can you please show me any MySQL history pages so I can try to lift a password or two?” Amazingly enough, search engines can do just that for you, and more. The term this practice has become known by is Google hacking.
Innumerable websites are available to help you with Google hack strings. For example, from the Google Hacking Database (a site operated by Mr. Johnny Long and Hackers for Charity, www.hackersforcharity.org/ghdb/), try this string from wherever you are right now: inurl:tsweb/default.htm
Basically we’re telling Google to go look for web pages that have TSWEB in the URL (indicating a remote access connection page), and we want to see only those that are running the default HTML page (default installs are common in a host of different areas and usually make things a lot easier for an attacker).
As you can see, Google hacking can be used for a wide range of purposes. For example, you can find free music downloads (pirating music is a no-no, by the way, so don’t do it).
Combine these with the advanced operators, and you can really dig down into some interesting stuff. Again, none of these search strings or “hacks” is illegal—you can search for anything you want (assuming, of course, you’re not searching for illegal content, but don’t take your legal advice from a certification study book). However, actually exploiting them without prior consent will definitely land you in hot water.
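A few of the advanced operators worth memorizing, collected in a short Python snippet (the one-line descriptions are paraphrased by me):

```python
# Standard Google advanced search operators used in "Google hacking".
GOOGLE_OPERATORS = {
    "site:":     "restrict results to a single site or domain",
    "inurl:":    "match a term appearing in the page's URL",
    "intitle:":  "match a term appearing in the page title",
    "filetype:": "return only files of the given type (pdf, xls, ...)",
    "cache:":    "show Google's cached copy of a page",
}

# Example dork: hunting for default remote-desktop login pages.
dork = "inurl:tsweb/default.htm"
print(dork)
```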
April 25th 2016
By QuBits, with the help of the Certified Ethical Hacker handbook