Mastering curl: interactive text guide

I recently watched the 3.5-hour workshop Mastering the curl command line by Daniel Stenberg, the author of curl. The video was awesome and I learned a ton of things, so I wanted a (shortened) text version for future reference. Here it is.

I've also included some interactive examples, so you can try out different curl commands as you read.


The curl project

Curl started in 1996 (back then it was called httpget). It was created by Rafael Sagula, who later transferred the project to Daniel. Daniel renamed the project to curl in 1998. In 2000, he extracted the networking functionality into the libcurl library, and curl itself became a command-line wrapper around it.

Curl stands for "client for URLs". It's a tool for client-side internet transfers with URLs. An internet transfer is an upload or download using a specific protocol (curl supports quite a few), where the endpoint is identified by a URL.

Curl is open source under the (slightly modified) MIT License. It accepts contributions from anyone, no paperwork required. There are about 15 people who can accept PRs. The lead (and only full-time) developer is Daniel.

Curl supports a crazy number of protocols, from HTTP, FTP and TELNET to IMAP, LDAP and GOPHER. It runs on 92 operating systems and has over 20 billion installations worldwide.

Curl + libcurl is about 160K lines of code. Curl is released every 8 weeks and has about 250 releases as of August 2023.

Release cycle
After the release, there is a 10-day cool-down, followed by a 21-day feature window, followed by a 25-day feature freeze.

Curl has extensive reference documentation. To see the short version, try this:

curl --help

To see the full version, try curl --manual (be careful, it's huge).

There is also a book named Everything curl available online, and commercial support provided by the company Daniel works for.

Command line options

Curl performs internet transfers, where it acts as a client, uploading or downloading data from a remote server. The data can be anything: text, images, audio, video, and so on.

Curl supports both insecure protocols (such as HTTP or FTP) and their secure counterparts (such as HTTPS or FTPS). Data transferred over an insecure protocol can be intercepted and tampered with, so it's better to always use the secure ones. Curl can disable server certificate verification with the --insecure (-k) flag, but you'd better not do this in production.

Curl currently has about 250 options, and the number is growing at a rate of ≈10/year.

There are short command line options (single dash):

curl -V

And long options (double dash):

curl --version

All options are available in "long" format, but only some of them have "short" counterparts.

Command-line options
The number of options keeps growing.

Some options are of boolean type. You can turn such options on:

curl --silent

Or off:

curl --no-silent

Some options accept arguments:

curl --output /tmp/uuid.json

Arguments that contain spaces should be enclosed in quotes:

curl -o /dev/null --write-out "type: %{content_type}"


Curl supports URLs (URIs, really) similar to how RFC 3986 defines them:

  • scheme defines a protocol (like https or ftp). If omitted, curl will try to guess one.
  • user and password are authentication credentials (passing credentials in URLs is generally avoided these days for security reasons).
  • host is the hostname, domain name or IP address of the server.
  • port is the port number. If omitted, curl will use the default port associated with the scheme (such as 80 for http or 443 for https).
  • path is the path to the resource on the server.
  • query is usually a sequence of name=value pairs separated by &.
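To make these pieces concrete, here is a shell sketch that pulls apart a made-up URL using parameter expansion (illustrative only; curl's own parser handles many more corner cases, such as URLs without credentials or ports):

```shell
# dissect a URL into the components described above
# (example.org, alice, etc. are invented for this sketch)
url='https://alice:secret@example.org:8443/api/items?sort=asc&limit=10'

scheme=${url%%://*}                      # https
rest=${url#*://}
user=${rest%%:*}                         # alice
hostport=${rest#*@}
hostport=${hostport%%/*}                 # example.org:8443
host=${hostport%%:*}                     # example.org
port=${hostport#*:}                      # 8443
pathquery=/${rest#*/}
path=${pathquery%%\?*}                   # /api/items
query=${pathquery#*\?}                   # sort=asc&limit=10

echo "scheme=$scheme user=$user host=$host port=$port"
echo "path=$path query=$query"
```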

For curl, anything starting with - or -- is an option, and everything else is a URL.


If you pass a lot of URL parameters, the query part can become quite long. The --url-query option allows you to specify query parts separately:

curl --url-query "name=Alice" --url-query "age=25"

Multiple URLs

Curl accepts any number of URLs, each of which needs a destination: stdout or a file. For example, this command saves the first UUID to /tmp/uuid1.json and the second UUID to /tmp/uuid2.json:

  -o /tmp/uuid1.json
  -o /tmp/uuid2.json

&& cat /tmp/uuid1.json
&& cat /tmp/uuid2.json

(Here and beyond, I will sometimes show multiline commands for illustrative purposes. In reality curl expects a single line or line breaks with \)

The -O option derives the filename from the URL:

curl --output-dir /tmp

&& ls /tmp

To write both responses to the same file, you can use redirection:

curl > /tmp/uuid.json

&& cat /tmp/uuid.json

URL globbing

Curl automatically expands glob expressions in URLs into multiple specific URLs.

For example, this command requests three different paths (al, bt, gm), each with two different parameters (num=1 and num=2), for a total of six URLs:

curl --output-dir /tmp -o "out_#1_#2.txt"{al,bt,gm}?num=[1-2]

&& ls /tmp

You can disable globbing with the --globoff option if []{} characters are valid in your URLs. Then curl will treat them literally.
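The expansion itself is easy to simulate in plain shell. This sketch prints the six URLs curl would generate from {al,bt,gm}?num=[1-2] (example.org stands in for the real host):

```shell
# simulate curl's glob expansion of {al,bt,gm}?num=[1-2]:
# every path variant is combined with every num value, six URLs total
for path in al bt gm; do
  for num in 1 2; do
    echo "https://example.org/$path?num=$num"
  done
done
```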

Parallel transfers
Use --parallel (-Z) to tell curl to process URLs concurrently.

Config file

As the number of options increases, the curl command becomes harder to decipher. To make it more readable, you can prepare a config file that lists one option per line (-- is optional):

output-dir /tmp

By default, curl reads the config from $HOME/.curlrc, but you can change this with the --config (-K) option:

curl --config /sandbox/.curlrc
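Putting it together, here's a complete (made-up) config file being created and inspected. The option names silent and output-dir are real; the file path and values are arbitrary:

```shell
# build an example config file: one option per line,
# '--' prefixes are optional and '#' starts a comment
cat > /tmp/example.curlrc <<'EOF'
# stay quiet and save downloads to /tmp
silent
output-dir /tmp
EOF

cat /tmp/example.curlrc
```

Running curl --config /tmp/example.curlrc would then behave as if --silent --output-dir /tmp were given on the command line.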

Progress meters

Curl has two progress meters. The default is verbose:

curl --no-silent

(I have a silent option in my config file, so I have to turn it off explicitly; by default, it's not set, so --no-silent is not needed)

The other is compact:

curl --no-silent --progress-bar

The --silent option turns the meter off completely:

curl --silent

State reset

When you set options, they apply to all URLs curl processes. For example, here both headers are sent to both URLs:

  -H "x-num: one"
  -H "x-num: two"

Sometimes that's not what you want. To reset the state between URL calls, use the --next option:

  -H "x-num: one"
  -H "x-num: two"

Curl basics

Now that we understand how curl handles URLs and options, let's talk about specific features.


--version (-V) knows everything about the installed version of curl:

curl -V

It lists (line by line) ➊ versions of curl itself and its dependencies, ➋ the release date, ➌ available protocols, and ➍ enabled features.


--verbose (-v) makes curl verbose, which is useful for debugging:

curl -v

If --verbose is not enough, try --trace (the single - sends the trace output to stdout):

curl --trace -

Or --trace-ascii:

curl --trace-ascii -

Use --write-out (-w) to extract specific information about the response. It supports over 50 variables. For example, here we extract the status code and response content type:

  -w "\nstatus: %{response_code}\ntype: %{content_type}"

Or some response headers:

  -w "\ndate: %header{date}\nlength: %header{content-length}"


--remote-name (-O) tells curl to save the output to a file specified by the URL (specifically, by the part after the last /). It's often used together with --output-dir, which tells curl where exactly to save the file:

curl --output-dir /tmp -O

&& cat /tmp/uuid

If the directory does not exist, --output-dir won't create it for you. Use --create-dirs for this:

curl --output-dir /tmp/some/place --create-dirs

&& cat /tmp/some/place/uuid

You can use --max-filesize (in bytes) to limit the allowed response size, but the size often isn't known in advance, so the option may not help.


Sometimes the remote host is temporarily unavailable. To deal with these situations, curl provides the --retry [num] option. If a request fails, curl will try it again, but no more than num times:

curl -i --retry 3

(this URL fails 50% of the time)

You can set the maximum time curl will spend retrying with --retry-max-time (in seconds) or the delay between retries with --retry-delay (also in seconds):

curl -i --retry 3
  --retry-max-time 2
  --retry-delay 1

For curl, "request failed" means one of the following HTTP codes: 408, 429, 500, 502, 503 or 504. If the request fails with a "connection refused" error, curl will not retry. But you can change this with --retry-connrefused, or even enable retries for all kinds of problems with --retry-all-errors.
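Roughly, the retry logic works like the following shell loop. This is a simulation: fake_request stands in for a transfer that fails twice before succeeding, and real curl additionally waits between attempts, doubling the delay unless --retry-delay fixes it:

```shell
# simulate a flaky endpoint: the first two calls fail, the third succeeds
echo 0 > /tmp/attempts
fake_request() {
  n=$(( $(cat /tmp/attempts) + 1 ))
  echo "$n" > /tmp/attempts
  [ "$n" -ge 3 ]   # succeed on the third call
}

# roughly what --retry does: on failure, try again a limited number of times
left=3
until fake_request; do
  left=$((left - 1))
  [ "$left" -gt 0 ] || { echo "giving up"; break; }
  echo "request failed, retrying ($left retries left)"
done
echo "finished after $(cat /tmp/attempts) attempts"
```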


Curl is often used to download data from the server, but you can also upload it. Use the --upload-file (-T) option:

echo hello > /tmp/hello.txt &&

curl -T /tmp/hello.txt

For HTTP uploads, curl uses the PUT method.

Transfer controls

To stop slow transfers, set the minimum allowed download speed (in bytes per second) with --speed-limit. By default, curl checks the speed over 30-second intervals, but you can change this with --speed-time.

For example, allow no less than 10 bytes/sec during a 3-second interval:

curl -v --speed-limit 10 --speed-time 3

To limit bandwidth usage, set --limit-rate. It accepts anything from bytes to petabytes:

curl --limit-rate 3
curl --limit-rate 3k
curl --limit-rate 3m
curl --limit-rate 3g
curl --limit-rate 3t
curl --limit-rate 3p

Another thing to limit is the request frequency (e.g. if you download a lot of files). Use --rate for this. It accepts a number of transfers per second, minute, hour, or day:

curl --rate 3/s[1-9].txt
curl --rate 3/m[1-9].txt
curl --rate 3/h[1-9].txt
curl --rate 3/d[1-9].txt

Name resolving

By default, curl uses your DNS server to resolve hostnames to IP addresses. But you can force it to resolve to a specific IP with --resolve:

curl --resolve

(this one fails because no one is listening on the resolved address)

Or you can even map a hostname:port pair to another hostname:port pair with --connect-to:

curl --connect-to

(this one works fine)

Connection-level settings
There are also some network connection-level settings.


To limit the maximum amount of time curl will spend interacting with a single URL, use --max-time (in fractional seconds):

curl --max-time 0.5

(this one fails)

Instead of limiting the total time, you can use --connect-timeout to limit only the time it takes to establish a network connection:

curl --connect-timeout 0.5

(this one works fine)


You almost never want to pass the username and password in the curl command itself. One way to avoid this is to use the .netrc file. It specifies hostnames and credentials for accessing them:

login alice
password cheese

login bob
password nuggets

Pass the --netrc option to use the $HOME/.netrc file, or --netrc-file to use a specific one:

echo -e "machine\nlogin alice\npassword cheese" > /tmp/netrc &&

curl --netrc-file /tmp/netrc

Exit status

When curl exits, it returns a numeric value to the shell. For success, it's 0, and for errors, there are about 100 different values.

For example, here is an exit status 7 (failed to connect to host):


You can access the exit status through the $? shell variable.
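In a script, you'd typically branch on that value. Here's a sketch; simulate_curl is a stand-in that exits with status 7, since a real failing transfer would need a network:

```shell
# stand-in for a curl invocation that fails to connect (exit status 7)
simulate_curl() { return 7; }

simulate_curl
status=$?    # $? holds the exit status of the last command

if [ "$status" -eq 0 ]; then
  echo "transfer succeeded"
else
  echo "transfer failed with exit status $status"
fi
```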


Curl is mostly used to work with HTTP, so let's talk about it.

HTTP/1.x is a plain-text protocol that describes the communication between the client and the server. The client sends messages like this:

POST /anything/chat HTTP/1.1
content-type: application/json
user-agent: curl/7.87.0

{
    "message": "Hello!"
}
  • The first line is a request line. The method (POST) defines the operation the client wants to perform. The path (/anything/chat) is the URL of the requested resource (without the protocol, domain and port). The version (HTTP/1.1) indicates the version of the HTTP protocol.
  • Next lines (until the empty line) are headers. Each header is a key-value pair that tells the server some useful information about the request. In our case, they are the type of the content (application/json) and the client's self-identification (user-agent); a real request would also carry the server's hostname in the host header.
  • Finally, there is the actual data that the client sends to the server.

The client receives messages like this in response:

HTTP/1.1 200 OK
date: Mon, 28 Aug 2023 07:51:49 GMT
content-type: application/json

{
    "message": "Hi!"
}
  • The first line is a status line. The version (HTTP/1.1) indicates the version of the HTTP protocol. The status code (200) tells whether the request was successful or not, and why (there are many status codes for different situations). The status message is a human-readable description of the status code (HTTP/2 does not have it).
  • Next lines (until the empty line) are headers. Similar to request headers, these provide useful information about the response to the client.
  • Finally, there is the actual data that the server sends to the client.

The HTTP protocol is stateless, so any state must be contained within the request itself, either in the headers or in the body.

HTTP/2, the successor to HTTP/1.1, is a binary protocol. However, curl displays HTTP/2 messages in plain text (just like HTTP/1.1), so we can safely ignore this fact for our purposes.
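To see how little magic there is in HTTP/1.1, here is the request from above assembled by hand (example.org is a placeholder host; header lines end with CRLF, and a blank line separates headers from the body):

```shell
# assemble the HTTP/1.1 request by hand:
# request line, headers, a blank line, then the body
printf 'POST /anything/chat HTTP/1.1\r\n'
printf 'host: example.org\r\n'
printf 'content-type: application/json\r\n'
printf 'user-agent: curl/7.87.0\r\n'
printf '\r\n'
printf '{"message": "Hello!"}\n'
```

Piped into a raw TCP connection, this text alone is a complete HTTP/1.1 request; curl constructs messages like this for you.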

HTTP method

Curl supports all HTTP methods (sometimes called verbs).

GET (the default one, requires no options):


HEAD (-I/--head, returns headers only):

curl -I

POST (-d/--data for data or -F/--form for HTTP form):

curl -d "name=alice"

Or any other method with --request (-X):

curl -X PATCH -d "name=alice"

Response code

Typically, status codes 2xx (specifically 200) are considered "success", while 4xx are treated as client-side errors and 5xx as server-side errors. But curl doesn't care about codes: to it, every HTTP response is a success:

curl && echo OK

To make curl treat 4xx and 5xx codes as errors, use --fail (-f):

curl -f && echo OK

To print the response code, use --write-out with the response_code variable:

curl -w "%{response_code}"

Response headers

To display response headers, use --head (-i):

curl -i

Or save them to a file using --dump-header (-D):

curl -D /tmp/headers

&& cat /tmp/headers

Response body

The response body, sometimes called the payload, is what curl outputs by default:


You can ask the server to compress the data with --compressed, but curl will still show it as uncompressed:

curl --compressed

(note how the Accept-Encoding request header has changed)


To ask the server for a piece of data instead of the whole thing, use the --range (-r) option. This will cause curl to request the specified byte range.

For example, here we request bytes 100 through 150 (51 bytes):

curl --range 100-150

Note that the server may ignore the range request and return the entire response.

If you are downloading data from a server, you can also use --continue-at (-C) to continue the previous transfer at the specified offset:

curl --continue-at 1000

HTTP versions

By default, curl uses HTTP/1.1 for the http scheme and HTTP/2 for https. You can change this with flags:


To find out which version was actually used in the transfer, use the http_version response variable:

curl -w "%{http_version}"

Conditional requests

Conditional requests are useful when you want to avoid downloading already downloaded data (assuming it is not stale). Curl supports two different conditions: file timestamp and etag.

Timestamp conditions use --time-cond (-z).

Download the data only if the remote resource is newer (condition holds):

curl --time-cond "Aug 30, 2023"

Or older (condition fails):

curl -i --time-cond "-Aug 30, 2023"

Etag conditions are a bit more involved. An etag is a value returned by the server that uniquely identifies the current version of the requested resource. It is often a hash of the data.

To check an etag, curl must first save it with --etag-save:

curl --etag-save /tmp/etags

And use --etag-compare in subsequent requests:

curl --etag-save /tmp/etags -o /dev/null &&

curl -i --etag-compare /tmp/etags

Timestamp conditions rely on the Last-Modified response header, so if the server does not provide it, the resource will always be considered newer. The same goes for etag conditions and the Etag response header.
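Under the hood, the etag flow boils down to a string comparison, which this local sketch simulates (no network involved; the etag value and file path are made up):

```shell
# what --etag-save stores: the etag value from the response
echo '"abc123"' > /tmp/etags

# what --etag-compare does, roughly: send the stored value in an
# If-None-Match request header; the server answers 304 if it still matches
stored_etag=$(cat /tmp/etags)
current_etag='"abc123"'   # pretend this is the server's current etag

if [ "$stored_etag" = "$current_etag" ]; then
  echo "304 Not Modified: nothing to download"
else
  echo "200 OK: downloading new version"
fi
```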


POST sends data to the server. By default, it's a set of key-value pairs encoded in a single string with an application/x-www-form-urlencoded Content-Type header.

You can use --data (-d) to specify individual key-value pairs (or the entire string):

curl -d name=alice -d age=25

To send data from a file, use @ with a file path. Use --header (-H) to set the Content-Type header according to the file contents:

echo "Alice, age 25" > /tmp/data.txt &&

curl -d @/tmp/data.txt -H "content-type: text/plain"

--data-raw posts data similar to --data, but without the special interpretation of the @ character.

To post JSON data, use --json. It automatically sets the Content-Type and Accept headers accordingly:

curl --json '{"name": "alice"}'
JSON requests
Use jo and jq to simplify working with JSON.

To URL-encode data (escape all symbols not allowed in URLs), use --data-urlencode:

curl --data-urlencode "Name: Alice Barton"

Multipart formpost

POST can send data as a sequence of "parts" with a multipart/form-data content type. It's often used for HTML forms that contain both text fields and files.

Each part has a name, headers, and data. Parts are separated by a "mime boundary":

Content-Disposition: form-data; name="person"

Content-Disposition: form-data; name="secret"; filename="file.txt"
Content-Type: text/plain

contents of the file

To construct multipart requests with curl, use --form (-F). Each of these options adds a part to the request:

touch /tmp/alice.png &&

curl -F name=Alice -F age=25 -F photo=@/tmp/alice.png


A redirect is when the server, instead of returning the requested resource, tells the client that the resource is located elsewhere (as indicated by the Location header). A redirect always has a 3xx response code.

Curl does not follow redirects by default, it returns the response as is:

curl -i

To make curl follow redirects, use --location (-L):

curl -L

To protect against endless loop redirects, use --max-redirs:

curl -L --max-redirs 3


The PUT method is often used to send files to the server. Use --upload-file (-T) for this:

echo hello > /tmp/hello.txt &&

curl -T /tmp/hello.txt

Sometimes PUT is used for requests in REST APIs. For these, use --request (-X) to set the method and --data (-d) to send the data:

curl -X PUT -H "content-type: application/json"
  -d '{"name": "alice"}'


The HTTP protocol is stateless. Cookies are an ingenious way around this:

  1. The server wants to associate some state with the client session.
  2. The server returns that state in the Set-Cookie response header.
  3. The client recognizes the cookies and sends them back with each request in the Cookie request header.

Each cookie has an expiration date — either explicit one or "end of session" one (for browser clients, this is often when the user closes the browser).

Curl ignores cookies by default. To enable them, use the --cookie (-b) option. To make curl persist cookies between calls, use --cookie-jar (-c).

Although their reputation has been tarnished by the ubiquitous "cookie banners", cookies remain one of the finest examples of feature naming.

Here the server sets the cookie sessionid to 123456 and curl stores it in the cookie jar /tmp/cookies:

curl -b "" -c /tmp/cookies

&& cat /tmp/cookies

Subsequent curl calls with -b /tmp/cookies will send the sessionid cookie back to the server.
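The jar itself is a plain-text file in the classic Netscape cookie format: one cookie per line, with seven tab-separated fields. Here's an illustrative jar written by hand (all values are made up):

```shell
# hand-written cookie jar in Netscape format; the seven fields are:
# domain, include-subdomains, path, secure, expiry (unix time), name, value
printf 'example.org\tFALSE\t/\tFALSE\t1893456000\tsessionid\t123456\n' \
  > /tmp/cookies-example

cat /tmp/cookies-example
```

Passing -b /tmp/cookies-example would make curl send "Cookie: sessionid=123456" when talking to example.org; an expiry field of 0 marks a session cookie.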

Curl automatically discards cookies from the cookie jar when they expire (this requires an explicit expiration date set by the server). To discard session-based cookies, use --junk-session-cookies (-j):

curl -j -b /tmp/cookies

Alternative services

The Alt-Svc HTTP response header indicates that there is another network location (an alternative service) that the client can use for future requests.

To enable alternative services, use --alt-svc. This tells curl to store the services in the specified file and consider them for future requests.

curl --alt-svc /tmp/altsvc -o /dev/null

&& cat /tmp/altsvc

HTTP Strict Transport Security

The HTTP Strict-Transport-Security response header (also known as HSTS) informs the client that the server should only be accessed via HTTPS, and that any future attempts to access it via HTTP should automatically be converted to HTTPS.

To make curl respect HSTS, use --hsts. This tells curl to store HSTS-enabled servers in the specified file and automatically convert http → https when accessing them.

curl --hsts /tmp/hsts -o /dev/null

&& cat /tmp/hsts

Other topics

Since the article is getting huge, I'm going to skip some parts of the video:

Curl supports protocols like SCP, SFTP, email (IMAP/POP3/SMTP), MQTT, TELNET, DICT, and WebSocket (experimental). Daniel goes over them in the video (FTP in detail, the rest very briefly).

Curl fully supports TLS (Transport Layer Security, the underlying protocol for HTTPS). You can set a specific minimal allowed TLS version, read certificates from a file, or use client certificates for mutual authentication.

Curl supports proxies (applications that act as intermediaries between the client and the server). There are HTTP and SOCKS proxies, and of course curl supports both, with or without authentication.

See the video at 1:36:09–2:20:40 and 3:24:41–3:32:40 if you are interested.


Curl is 25 years old in 2023. It has been growing and improving all these years, and the rate of development is only increasing. The Internet is constantly changing, with new protocols and ways of doing transfers popping up all the time. Curl needs to keep up, so it will continue to grow and add exciting new features!

There are two great resources if you want to dig deeper:

Here are some final words of wisdom for you:


Thank you for reading, and huge thanks to Daniel for his great workshop and the curl tool itself!


P.S. Interactive examples in this post are powered by codapi — an open source tool I'm building. Use it to embed live code snippets into your product docs, online course or blog.
