What Does Your User Agent Say About You?


A user agent is a computer program representing a person, for example, a browser in a Web context.

Besides a browser, a user agent could be a bot scraping webpages, a download manager, or another app accessing the Web. With each request they make to a server, browsers include a self-identifying User-Agent HTTP header; its value is called a user agent (UA) string. This string often identifies the browser, its version number, and its host operating system.

Spam bots, download managers, and some browsers often send a fake UA string to announce themselves as a different client. This is known as user agent spoofing.

The user agent string can be accessed with JavaScript on the client side using the navigator.userAgent property.

A typical user agent string looks like this: "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:35.0) Gecko/20100101 Firefox/35.0".
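Pulling that example apart programmatically shows the pieces; a minimal Python sketch (the regular expressions here are illustrative, not a general-purpose parser):

```python
import re

ua = "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:35.0) Gecko/20100101 Firefox/35.0"

# The parenthesised "comment" carries platform details; tokens such as
# Firefox/35.0 follow the Product/Version convention.
platform = re.search(r"\((.*?)\)", ua).group(1)
browser = re.search(r"(Firefox)/([\d.]+)", ua)

print(platform)                            # X11; Ubuntu; Linux x86_64; rv:35.0
print(browser.group(1), browser.group(2))  # Firefox 35.0
```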


User Agent String


Paul Gonzalez
• Saturday, 21 November, 2020
• 11 min read

During the first browser war, many web servers were configured to send web pages that required advanced features, including frames, to clients that were identified as some version of Mozilla only. Other browsers were considered to be older products such as Mosaic, Cello, or Samba, and would be sent a bare-bones HTML document.



Automated agents are expected to follow the rules in a special file called robots.txt. The popularity of various Web browser products has varied throughout the Web's history, and this has influenced the design of websites: websites are sometimes designed to work well only with particular browsers, rather than according to the uniform standards of the World Wide Web Consortium (W3C) or the Internet Engineering Task Force (IETF).

Websites often include code that detects the browser version and adjusts the page design sent according to the user agent string received. Thus, various browsers have a feature to cloak or spoof their identification in order to force certain server-side content.
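A crude illustration of such server-side sniffing; the function and template names are invented for this sketch, and real sites use maintained parser libraries rather than substring tests:

```python
def choose_template(user_agent: str) -> str:
    # Crude sniffing: route legacy or mobile clients to simpler pages.
    # Template names are invented for illustration.
    if "Mosaic" in user_agent or "Cello" in user_agent:
        return "barebones.html"
    if "Mobile" in user_agent:
        return "mobile.html"
    return "full.html"

print(choose_template(
    "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:35.0) Gecko/20100101 Firefox/35.0"
))  # full.html
```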

For example, the Android browser identifies itself as Safari (among other things) in order to aid compatibility. User agent sniffing is the practice of websites showing different or adjusted content when viewed with certain user agents.

An example of this is Microsoft Exchange Server 2003's Outlook Web Access feature. When viewed with Internet Explorer 6 or newer, more functionality is displayed compared to the same page in any other browsers.

Web browsers created in the United States, such as Netscape Navigator and Internet Explorer, previously used the letters U, I, and N to specify the encryption strength in the user agent string. Until 1996, the United States government disallowed the export of encryption with keys longer than 40 bits, so vendors shipped various browser versions with different encryption strengths.




A typical Android WebKit user agent string of that era: Mozilla/5.0 (Linux; U; Android 2.2; en-gb; HTC_DesireHD_A9191 Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1

The Chrome (or Chromium/Blink-based) user agent string is similar to Firefox's. For compatibility, it adds tokens such as "(KHTML, like Gecko)" and "Safari".

The Opera browser is also based on the Blink engine, which is why its user agent string looks almost the same as Chrome's, except that it adds an "OPR/" token with Opera's own version. In the Android example above, the Version/4.0 token is Mobile Safari's version.
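Because a Chrome user agent also contains "Safari", and an Opera user agent additionally contains both "Chrome" and "Safari", detection code has to check the most specific token first. A minimal sketch:

```python
def browser_family(ua: str) -> str:
    # Order matters: Opera's UA contains "Chrome/" and "Safari/",
    # and Chrome's contains "Safari/", so test the most specific first.
    if "OPR/" in ua:
        return "Opera"
    if "Chrome/" in ua:
        return "Chrome"
    if "Safari/" in ua:
        return "Safari"
    if "Firefox/" in ua:
        return "Firefox"
    return "unknown"
```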


I would like to store the user agent string, the current time, and the opened URL (not the IP address or anything else) in a database for each page visit, to be able to build some statistical measurements.

Scenario 3: if in one dataset you record that a user agent was associated with order ID 123456, and in another dataset you record that order ID 123456 was for John Smith (plus address etc.), then the user agent is personal data: it relates to an identifiable natural person.

GDPR Article 6(1) lists the lawful bases for processing, including: "(e) processing is necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller". Point (f) of the first subparagraph (legitimate interests) shall not apply to processing carried out by public authorities in the performance of their tasks.

While the GDPR is the foundational data privacy law in the EU, the ePrivacy Directive goes into more detail for internet and telecommunication services. A user agent string would usually fall under its definition of "traffic data" because it is generally transmitted as an HTTP header.

Processing traffic data is permitted, among other cases: for transmitting a communication, e.g. to have your web server respond with different content for mobile and desktop user agents; when the data is made anonymous, e.g. as part of aggregate statistics on browser versions; for billing purposes; or when the user consents, where "consent" is defined by the GDPR. I think that recording {URL, date, user agent} tuples for the purpose of building statistics is not personal data in your context, or is at least a processing of personal data that doesn't require identification per GDPR Art. 11.

Such records would likely count as "anonymous" for privacy purposes, so you can collect such statistics without having to ask for consent. However, if a user agent is rare relative to the traffic volume on your site, it would still be possible to single out a particular user.
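One mitigation sketch: only publish counts for user agents seen at least k times, suppressing rare strings that could single someone out (the function name and threshold are illustrative):

```python
from collections import Counter

def publishable_ua_counts(user_agents, k=5):
    # Suppress user agents seen fewer than k times: a rare string
    # could single out one visitor even without any other identifier.
    counts = Counter(user_agents)
    return {ua: n for ua, n in counts.items() if n >= k}
```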


Possible mitigations include using streaming Bloom filters or other privacy-preserving probabilistic data structures to avoid storing unique records, storing only denormalized aggregate records, and truncating or rounding timestamps to avoid linking events via their time. Instead of one table with a {URL, timestamp, user agent} row per event, you might have separate tables with {URL, time slot, counter} and {user agent, time slot, counter} aggregate records, where the time slots are appropriately spaced for the traffic of the site.
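The aggregate-table idea can be sketched in a few lines; the slot width and structure here are assumptions to adapt to your site's traffic:

```python
from collections import Counter
from datetime import datetime, timezone

SLOT_SECONDS = 3600  # one-hour slots; widen for low-traffic sites

url_counts = Counter()  # {(url, slot): count}
ua_counts = Counter()   # {(user_agent, slot): count}

def record_visit(url: str, user_agent: str, when: datetime) -> None:
    # Truncate the timestamp to its slot so events in the two tables
    # can no longer be joined back together on an exact time.
    slot = int(when.timestamp()) // SLOT_SECONDS * SLOT_SECONDS
    url_counts[(url, slot)] += 1
    ua_counts[(user_agent, slot)] += 1
```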

Article 4(1) of the GDPR defines personal data as any information relating to an identified or identifiable natural person. Recital 30 mentions internet protocol addresses as an example of data which might identify natural persons when "combined with unique identifiers and other information received by the servers".

The sample you show does not contain other information that, used in combination with the IP address, would result in or facilitate that identification. It is also important to know that using the right user agent can help and simplify the scraping of many websites.

The user agent is a text string that the client sends in the headers of a request, and it serves as an identifier for the type of device, operating system, and browser that we are using. This information tells the server, for example, that we are using the Google Chrome 80 browser on a computer running Windows 10.

Because the user agent is a plain-text string, it is easy to manipulate, and we can thereby trick the web server into believing that we are visiting it from a different device. If we do not set a user agent in our requests, our tools will use a default one that, in many cases, announces our presence as a bot; many websites do not allow bots and may quickly ban us.
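With Python's standard-library urllib (the Requests library accepts the same headers= dictionary), setting the header looks like this; the Chrome-style string is just an example value:

```python
import urllib.request

# Without this header, urllib announces itself as "Python-urllib/3.x",
# which many sites treat as bot traffic.
ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
      "AppleWebKit/537.36 (KHTML, like Gecko) "
      "Chrome/80.0.3987.122 Safari/537.36")

# Build (without sending) a request to inspect what goes on the wire.
req = urllib.request.Request("https://example.com/", headers={"User-Agent": ua})
print(req.get_header("User-agent"))
```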


Python Requests doesn't execute JavaScript, so we will not be able to see the information that interests us; let's try another user agent. The response is quite similar to the one for a desktop browser, and for the same reason: the server expects a smartphone to use JavaScript to display the page content.

In the examples we saw how popular websites respond differently depending on the device that visits them, and we can use this to our advantage when scraping them. User agent strings come in all shapes and sizes, and the number of unique user agents is growing all the time.

If you need to integrate the user agent parser directly into your website or system then it's very simple to use the API. This will let you do things like advanced filtering and searching, identify trends in user agents, perform statistical analysis and other interesting applications.

Most fields using product tokens also allow sub-products, which form a significant part of the application, to be listed, separated by whitespace. The tokens are typically listed in order of significance; however, this is left entirely up to the software publisher.
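A rough tokenizer following that convention, stripping parenthesised comments and splitting product/version pairs (a sketch, not a full RFC 7231 parser):

```python
import re

def product_tokens(ua: str):
    # Remove parenthesised comments, then split the remaining
    # product/version tokens on whitespace (RFC 7231 "product" syntax).
    without_comments = re.sub(r"\([^)]*\)", "", ua)
    tokens = without_comments.split()
    return [tuple(t.split("/", 1)) if "/" in t else (t, None) for t in tokens]

print(product_tokens(
    "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:35.0) Gecko/20100101 Firefox/35.0"
))
```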

In summary, it is not a very standardized format and, as we will see, it has evolved into a fairly chaotic environment that can only be unraveled by sustained, dedicated attention to mapping and interpreting this entropy. One of the main use cases of a user agent parser is to identify and handle requests from certain types of traffic.


This is particularly useful when dealing with the wide spectrum of devices in use today, and allows you to get as fine-grained as you like with your content targeting strategy. Outside of web optimization, this has obvious applications to the advertising sector, where the device can be useful as a criterion for targeting.

Bots and crawlers have User-Agents too, and can be identified accurately by a good device detection solution. Security is the other big area where being aware of the nature of the traffic hitting your services is extremely important.

These range from search engines to link checkers, SEO tools, feed readers, and scripts, as well as more nefarious actors at large in the web landscape. Being able to distinguish between these different sources can provide significant savings in IT costs by detecting and identifying bot traffic to your site.
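A cheap first-pass bot filter might look like the sketch below; the marker list is a small illustrative sample, and since the header is trivially spoofed, serious detection also checks IP ranges and behaviour:

```python
BOT_MARKERS = ("bot", "crawler", "spider", "slurp")  # common, not exhaustive

def looks_like_bot(ua: str) -> bool:
    # Case-insensitive substring check against known bot markers.
    ua = ua.lower()
    return any(marker in ua for marker in BOT_MARKERS)
```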

You would need to constantly update your regex rules as new devices, browsers, and OSs are released, and then run tests to see if the solution still works well. At some point, this becomes a costly maintenance job and, over time, a real risk that you are misdirecting or failing to detect much of your traffic. Accurately parsing User-Agents is one problem.

DeviceAtlas uses a Patricia trie data structure to determine the properties of a device in the quickest and most efficient way. This is the reason why major companies rely on established solutions built on proven, patented technology like DeviceAtlas.


There's no "standard" way of writing a user agent string, so different web browsers use different formats (some are wildly different), and many web browsers cram loads of information into their user agents. Some mobile web browsers will let you change what the browser identifies itself as (i.e. "Mobile Mode" or "Desktop Mode") in order to access websites that only allow desktop computers.

Any time a search engine spider or other robot connects to your server, it leaves a user agent identity as well.

It is important to know about user agents because they can tell you a lot about your website's visitors and what types of computers they are using. Web server statistical software records user agent data and will display it for you in charts and graphs.
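Such software typically reads the user agent out of the server's access log; in the common "combined" log format it is the last quoted field. A sketch with a made-up log line:

```python
import re

# Apache/Nginx "combined" log format puts the user agent in the last
# quoted field; this line is a fabricated example.
LINE = ('203.0.113.7 - - [21/Nov/2020:10:00:00 +0000] "GET / HTTP/1.1" '
        '200 512 "-" "Mozilla/5.0 (X11; Linux x86_64) Firefox/83.0"')

def user_agent_from_log(line: str) -> str:
    quoted = re.findall(r'"([^"]*)"', line)
    return quoted[-1] if quoted else ""

print(user_agent_from_log(LINE))  # Mozilla/5.0 (X11; Linux x86_64) Firefox/83.0
```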
