The Essential Burp Suite
OK, we have downloaded Burp Suite, so let's start the tool.
1. To make sure Burp Suite has enough memory to work with, it is prudent to specify how much system RAM is allocated to it right at startup. There is also a small caveat we should know about, covered below.
like this: java -jar -Xmx2048M /path/to/Burpsuite.jar
or like this: java -jar -Xmx2G /path/to/Burpsuite.jar
Sometimes, when we start browsing through the proxy, the browser shows a cryptic error such as the following:
Burp Proxy error: Permission denied: connect
Why does this happen? Because the browser is trying to connect over the IPv6 interface. All we need to do is tell Java that we want to use the IPv4 interface, by passing the following parameter to the runtime:
java -Xmx2G -Djava.net.preferIPv4Stack=true -jar /path/to/Burpsuite.jar
Then we should configure the web browser to use Burp Suite as its proxy. Once that is done, we can intercept all the traffic we are interested in between the client and the server. Besides the standard setup, we can also use invisible proxying to intercept traffic from non-proxy-aware clients, such as mobile apps and other thick clients.
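As a quick sanity check that the proxy is wired up, we can push a request through it from the command line. This is only a sketch, assuming Burp's default listener on 127.0.0.1:8080; the -k flag skips certificate verification because Burp re-signs HTTPS traffic with its own CA:
curl -x http://127.0.0.1:8080 -k https://example.com/
If everything is set up correctly, the request appears in the Proxy history.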
2. Spending some quality time figuring out the scope, adding the required target URLs, and making sure that our inclusion and exclusion lists are accurate will save us a lot of time and effort while using the other tools of the suite. Depending on the testing activity we are planning, defining the scope properly might even be mandatory; a small example follows.
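For instance, if the engagement covers only one application (the host below is purely illustrative), the inclusion list could hold a single URL prefix while everything else stays out of scope:
Include in scope: https://www.example.com/webapp/
Anything that does not match the prefix, such as third-party analytics hosts, can then be kept out of spidering and scanning.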
3. Intruder
The Intruder tool is incredibly flexible and infinitely customizable: we mark the positions in a request where payloads should be inserted and pick the payload sets that get sent to each position.
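As a rough sketch (the endpoint, host, and parameter values here are invented), a request template with two payload positions marked for Intruder could look like this:
POST /login HTTP/1.1
Host: www.example.com
Content-Type: application/x-www-form-urlencoded

username=§admin§&password=§letmein§
Intruder replaces the text between the § markers with entries from the chosen payload lists; how the positions are combined depends on the attack type selected.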
4. Scanner
An active scan is a great idea when we have full control over what is being scanned. In the active scan mode, Burp sends different kinds of requests to the application and, based on the responses, verifies whether a particular kind of vulnerability exists or not.
The Scanner options can be customized to control what types of values will be fuzzed as part of the active scanning mode.
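To make that concrete, here is a rough illustration (the URL and parameter are invented) of the kind of change an active scan makes while probing a parameter for SQL injection:
original request:  GET /products?id=42 HTTP/1.1
modified request:  GET /products?id=42' HTTP/1.1
If the modified request produces a telling difference in the response, such as a database error message, Burp reports a potential issue for the tester to verify.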
5. Spidering
Spidering, or web crawling as it is better known, is the process of automatically following all the links on a web page to discover both the static and dynamic web resources of a web application. Burp uses the Spider tool to automate the mapping of the application.
In the same vein, if website owners don't want their site crawled by Google or Baidu, they add the following to robots.txt:
User-agent: *
Disallow: /