curlHandle configure ?options?
curlHandle perform
curlHandle getinfo curlinfo_option
curlHandle cleanup
curlHandle reset
curlHandle duphandle
curlHandle pause
curlHandle resume
curl::transfer ?options?
curl::version
curl::escape url
curl::unescape url
curl::curlConfig option
curl::versioninfo option
curl::easystrerror errorCode
RETURN VALUE
configure is called to set the options for the transfer. Most operations in TclCurl have default actions, and by using the appropriate options you can make them behave differently (as documented). All options are set with the option followed by a parameter.
Note: the options set with this procedure are valid for the forthcoming data transfers that are performed when you invoke perform.
The options are not reset between transfers (except where noted), so if you want subsequent transfers with different options, you must change them between the transfers. You can optionally reset all options back to the internal default with curlHandle reset.
curlHandle is the return code from the curl::init call.
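The basic configure/perform/cleanup cycle can be sketched as follows (the URL and output file name are illustrative):

```tcl
package require TclCurl

# Create a handle, set the transfer options, run the transfer,
# then free the handle when done.
set curlHandle [curl::init]
$curlHandle configure -url "http://example.com/index.html" -file "index.html"
if {[catch {$curlHandle perform} curlErrorNumber]} {
    puts "Transfer failed: [curl::easystrerror $curlErrorNumber]"
}
$curlHandle cleanup
```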
OPTIONS
You hardly ever want this set in production use; you will almost always want it when you debug or report problems. Another neat option for debugging is -debugproc.
NOTE: you will be passed as much data as possible in each invocation, but you cannot make any assumptions about the amount. It may be nothing if the file is empty, or it may be thousands of bytes.
proc ProgressCallback {dltotal dlnow ultotal ulnow}
In order for this option to work you have to set the -noprogress option to '0'. Setting this option to the empty string will restore the original progress function.
If you transfer data with the multi interface, this procedure will not be called during periods of idleness unless you call the appropriate procedure that performs transfers.
You can pause and resume a transfer from within this procedure using the pause and resume commands.
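A sketch of a progress procedure matching the signature above; the URL and file name are illustrative, and -noprogress must be set to 0 for the callback to fire:

```tcl
proc ProgressCallback {dltotal dlnow ultotal ulnow} {
    # Called periodically during the transfer with byte counts.
    puts "Downloaded $dlnow of $dltotal bytes"
}

set curlHandle [curl::init]
$curlHandle configure -url "http://example.com/big.iso" -file "big.iso" \
    -noprogress 0 -progressproc ProgressCallback
```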
debugProc {infoType data}
where infoType specifies what kind of information it is (0 text, 1 incoming header, 2 outgoing header, 3 incoming data, 4 outgoing data, 5 incoming SSL data, 6 outgoing SSL data).
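A minimal debug procedure using the infoType values listed above (the URL is illustrative):

```tcl
proc DebugCallback {infoType data} {
    # infoType 0 is informational text, 2 is an outgoing header.
    switch -- $infoType {
        0 { puts "Info: $data" }
        2 { puts "Sent header: $data" }
    }
}

set curlHandle [curl::init]
$curlHandle configure -url "http://example.com/" -verbose 1 \
    -debugproc DebugCallback
```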
This method is not fail-safe and there are occasions where non-successful response codes will slip through, especially when authentication is involved (response codes 401 and 407).
You might get some amount of headers transferred before this situation is detected, for example when a "100-continue" is received as a response to a POST/PUT and a 401 or 407 is received immediately afterwards.
If the given URL lacks the protocol part ("http://" or "ftp://" etc), it will attempt to guess which protocol to use based on the given host name. If the given protocol of the set URL is not supported, TclCurl will return the unsupported protocol error when you call perform. Use curl::versioninfo for detailed info on which protocols are supported.
NOTE: this is the one option required to be set before perform is called.
Supported protocols are: http, https, ftp, ftps, scp, sftp, telnet, ldap, ldaps, dict, file and tftp. You can use the string 'all' to enable all of them.
When you tell the extension to use an HTTP proxy, TclCurl will transparently convert operations to HTTP even if you specify an FTP URL etc. This may have an impact on what other features of the library you can use, such as quote and similar FTP specifics that will not work unless you tunnel through the HTTP proxy. Such tunneling is activated with proxytunnel.
TclCurl respects the environment variables http_proxy, ftp_proxy, all_proxy etc, if any of those are set. The use of this option does however override any possibly set environment variables.
Setting the proxy string to "" (an empty string) will explicitly disable the use of a proxy, even if there is an environment variable set for it.
The proxy host string can be specified the exact same way as the proxy environment variables, including the protocol prefix (http://) and an embedded user + password.
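A proxy configuration sketch; the proxy host, port and credentials are placeholders:

```tcl
# Tunnel an FTP transfer through an HTTP proxy with embedded credentials.
set curlHandle [curl::init]
$curlHandle configure -url "ftp://ftp.example.com/file.txt" \
    -proxy "http://user:password@proxy.example.com:3128" \
    -proxytunnel 1
```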
WARNING: this option is considered obsolete. Stop using it. Switch over to using the share interface instead! See tclcurl_share.
Pass the number specifying what remote port to connect to, instead of the one specified in the URL or the default port for the used protocol.
Pass a number to specify whether the TCP_NODELAY option should be set or cleared (1 = set, 0 = clear). The option is cleared by default. This will have no effect after the connection has been established.
Setting this option will disable TCP's Nagle algorithm. The purpose of this algorithm is to try to minimize the number of small packets on the network (where "small packets" means TCP segments less than the Maximum Segment Size (MSS) for the network).
Maximizing the amount of data sent per TCP segment is good because it amortizes the overhead of the send. However, in some cases (most notably telnet or rlogin) small segments may need to be sent without delay. This is less efficient than sending larger amounts of data at a time, and can contribute to congestion on the network if overdone.
You can set it to the following values:
Undefined values of the option will have this effect.
When using NTLM, you can set domain by prepending it to the user name and separating the domain and name with a forward (/) or backward slash (\). Like this: "domain/user:password" or "domain\user:password". Some HTTP servers (on Windows) support this style even for Basic authentication.
When using HTTP and -followlocation, TclCurl might perform several requests to possibly different hosts. TclCurl will only send this user and password information to hosts using the initial host name (unless -unrestrictedauth is set), so if TclCurl follows locations to other hosts it will not send the user and password to those. This is enforced to prevent accidental information leakage.
In order to specify the password to be used in conjunction with the user name use the -password option.
It should be used in conjunction with the -username option.
It should be used in same way as the -proxyuserpwd is used, except that it allows the username to contain a colon, like in the following example: "sip:user@example.com".
Note the -proxyusername option is an alternative way to set the user name while connecting to the proxy. It doesn't make sense to use them together.
The difference is only for URLs that contain a query part (a '?' character and the text to the right of it).
TclCurl now supports this quirk, and you enable it by using this option in both httpauth and proxyauth.
The methods are those listed above for the httpauth option. As of this writing, only Basic and NTLM work.
This is a request, not an order; the server may or may not do it. This option must be set or else any unsolicited encoding done by the server is ignored. See the special file lib/README.encoding in libcurl docs for details.
NOTE: this means that the extension will re-send the same request on the new location and follow new Location: headers all the way until no more such headers are returned. -maxredirs can be used to limit the number of redirects TclCurl will follow.
NOTE: TclCurl can limit what protocols it will automatically follow. The accepted protocols are set with -redirprotocols and excludes the FILE and SCP protocols by default.
Controls how TclCurl acts on redirects after POSTs that get a 301 or 302 response back. A "301" as parameter tells the TclCurl to respect RFC 2616/10.3.2 and not convert POST requests into GET requests when following a 301 redirection. Passing a "302" makes TclCurl maintain the request method after a 302 redirect. "all" is a convenience string that activates both behaviours.
The non-RFC behaviour is ubiquitous in web browsers, so the extension does the conversion by default to maintain consistency. However, a server may require a POST to remain a POST after such a redirection.
This option is meaningful only when setting -followlocation.
The option used to be known as -post301, which should still work but is now deprecated.
Passing 1 tells TclCurl to respect RFC 2616/10.3.2 and not convert POST requests into GET requests when following a 301 redirection. The non-RFC behaviour is ubiquitous in web browsers, so the conversion is done by default to maintain consistency. However, a server may require a POST to remain a POST after such a redirection. This option is meaningful only when setting -followlocation.
This option is deprecated starting with version 0.12.1, you should use -upload.
Use the -postfields option to specify what data to post and -postfieldsize to set the data size. Optionally, you can provide data to POST using the -readproc option.
You can override the default POST Content-Type: header by setting your own with -httpheader.
Using POST with HTTP 1.1 implies the use of a "Expect: 100-continue" header. You can disable this header with -httpheader as usual.
If you use POST to a HTTP 1.1 server, you can send data without knowing the size before starting the POST if you use chunked encoding. You enable this by adding a header like "Transfer-Encoding: chunked" with -httpheader. With HTTP 1.0 or without chunked transfer, you must specify the size in the request.
When setting post to 1, it will automatically set nobody to 0.
NOTE: if you have issued a POST request and want to make a HEAD or GET instead, you must explicitly pick the new request type using -nobody or -httpget or similar.
This is a normal application/x-www-form-urlencoded kind, which is the most commonly used one by HTML forms.
If you want to do a zero-byte POST, you need to set -postfieldsize explicitly to zero, as simply setting -postfields to NULL or "" just effectively disables the sending of the specified string. TclCurl will instead assume that the POST data will be sent using the read callback!
Using POST with HTTP 1.1 implies the use of a "Expect: 100-continue" header. You can disable this header with -httpheader as usual.
Note: to make multipart/formdata posts (aka rfc1867-posts), check out -httppost option.
This is the only case where the data is reset after a transfer.
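A minimal -postfields sketch; the URL and form data are illustrative:

```tcl
# Send an application/x-www-form-urlencoded POST body.
set curlHandle [curl::init]
$curlHandle configure -url "http://example.com/login" \
    -post 1 \
    -postfields "name=admin&password=secret"
catch {$curlHandle perform}
$curlHandle cleanup
```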
First, there are some basics you need to understand about multipart/formdata posts. Each part consists of at least a NAME and a CONTENTS part. If the part is made for file upload, there are also a stored CONTENT-TYPE and a FILENAME. Below, we'll discuss what options you use to set these properties in the parts you want to add to your post.
The list must contain a 'name' tag followed by a string with the name of the section. There are three tags to indicate the value of the section: 'value' followed by a string with the data to post, 'file' followed by the name of the file to post, and 'contenttype' with the type of the data (text/plain, image/jpg, ...). You can also indicate a false file name with 'filename'; this is useful in case the server checks whether the given file name is valid, for example by testing if it starts with 'c:\' as any real file name does, or if you want to include the full path of the file to post. You can also post the content of a variable as if it were a file with the options 'bufferName' and 'buffer', or use 'filecontent' followed by a file name to read that file and use its contents as data.
Should you need to specify extra headers for the form POST section, use 'contentheader' followed by a list with the headers to post.
Please see 'httpPost.tcl' and 'httpBufferPost.tcl' for examples.
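A sketch of a two-part form post using the tag scheme described above; the URL and file names are illustrative:

```tcl
# One plain value part and one file-upload part with a content type
# and a false file name.
set curlHandle [curl::init]
$curlHandle configure -url "http://example.com/upload" \
    -httppost [list name "comment" value "test upload"] \
    -httppost [list name "attachment" file "/tmp/report.txt" \
        contenttype "text/plain" filename "report.txt"]
catch {$curlHandle perform}
$curlHandle cleanup
```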
If TclCurl can't set the data to post an error will be returned:
The headers included in the linked list must not be CRLF-terminated, because TclCurl adds CRLF after each header item. Failure to comply with this will result in strange bugs because the server will most likely ignore part of the headers you specified.
The first line in a request (containing the method, usually a GET or POST) is not a header and cannot be replaced using this option. Only the lines following the request-line are headers. Adding this method line in this list of headers will only cause your request to send an invalid header.
NOTE: The most commonly replaced headers have "shortcuts" in the options: cookie, useragent, and referer.
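A sketch of setting custom headers; the URL and header values are illustrative:

```tcl
# Each list element is one complete header line, without trailing CRLF.
set curlHandle [curl::init]
$curlHandle configure -url "http://example.com/api" \
    -httpheader [list "Accept: application/json" "X-Custom-Id: 1234"]
```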
NOTE: The alias itself is not parsed for any version strings. Before version 7.16.3, TclCurl used the value set by the httpversion option, but starting with 7.16.3 the protocol is assumed to match HTTP 1.0 when an alias matches.
If you need to set multiple cookies, you need to set them all using a single option and thus you need to concatenate them all in one single string. Set multiple cookies in one string like this: "name1=content1; name2=content2;" etc.
Note that this option sets the cookie header explicitly in the outgoing request(s). If multiple requests are done due to authentication, redirections followed or similar, they will all get this cookie passed on.
Using this option multiple times will only make the latest string override the previous ones.
Given an empty or non-existing file, this option will enable cookies for this curl handle, making it understand and parse received cookies and then use matching cookies in future requests.
If you use this option multiple times, you add more files to read.
Using this option also enables cookies for this session, so if you, for example, follow a location it will make matching cookies get sent accordingly.
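A sketch of enabling the cookie engine from a file; the file name and URL are illustrative:

```tcl
# Parse cookies from cookies.txt (an empty or missing file still
# enables the cookie engine) and send matching cookies on redirects.
set curlHandle [curl::init]
$curlHandle configure -url "http://example.com/" \
    -cookiefile "cookies.txt" \
    -followlocation 1
```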
TclCurl will not and cannot report an error for this. Using 'verbose' will cause a warning to be displayed, but that is the only visible feedback you get about this possibly lethal situation.
When setting httpget to 1, nobody will automatically be set to 0.
The specified block size will only be used pending support by the remote server. If the server does not return an option acknowledgement or returns an option acknowledgement with no blksize, the default of 512 bytes will be used.
The address can be followed by a ':' to specify a port, optionally followed by a '-' to specify a port range. If the port specified is 0, the operating system will pick a free port. If a range is provided and none of the ports in the range is available, libcurl will report CURLE_FTP_PORT_FAILED for the handle. Invalid port/range settings are ignored. IPv6 addresses followed by a port or port range have to be in brackets; IPv6 addresses without a port/range specifier may be in brackets.
Keep in mind the commands to send must be 'raw' ftp commands, for example, to create a directory you need to send mkd Test, not mkdir Test.
Valid SFTP commands are: chgrp, chmod, chown, ln, mkdir, pwd, rename, rm, rmdir and symlink.
All the quote options (-quote, -postquote and -prequote) now accept an asterisk preceding the command to send when using FTP, as a sign that TclCurl shall simply ignore the response from the server instead of treating it as an error. Not treating a 400+ FTP response code as an error means that failed commands will not abort the chain of commands, nor will they cause the connection to get disconnected.
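A sketch of sending raw FTP commands before the transfer; the server name is illustrative:

```tcl
# Raw FTP commands: "mkd", not "mkdir". The leading asterisk tells
# TclCurl to ignore a failure (e.g. the directory already exists).
set curlHandle [curl::init]
$curlHandle configure -url "ftp://ftp.example.com/" \
    -quote [list "*mkd Test" "cwd Test"]
```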
This causes an FTP NLST command to be sent. Beware that some FTP servers list only files in their response to NLST, they might not include subdirectories and symbolic links.
This setting also applies to SFTP connections. TclCurl will attempt to create the remote directory if it can't obtain a handle to the target location. The creation will fail if a file of the same name as the directory already exists, or if lack of permissions prevents creation.
If set to 2, TclCurl will retry the CWD command if the subsequent MKD command fails. This is especially useful if you're doing many simultaneous connections against the same server and they all have this option enabled: CWD may first fail, then another connection does MKD before this one, so this connection's MKD fails but retrying CWD works.
This option has no effect if PORT, EPRT or EPSV is used instead of PASV.
Alternatively, and what seems to be the recommended way, you can set the option to one of these values:
Pass TclCurl one of the values from below, to alter how TclCurl issues "AUTH TLS" or "AUTH SSL" when FTP over SSL is activated (see -ftpssl).
You may need this option because of servers like BSDFTPD-SSL from http://bsdftpd-ssl.sc.ru/ "which won't work properly when "AUTH SSL" is issued (although the server responds fine and everything) but requires "AUTH TLS" instead".
NOTE: TclCurl does not do a complete ASCII conversion when doing ASCII transfers over FTP. This is a known limitation/flaw that nobody has rectified. TclCurl simply sets the mode to ascii and performs a standard transfer.
Ranges only work on HTTP, FTP and FILE transfers.
For FTP, set this option to -1 to make the transfer start from the end of the target file (useful to continue an interrupted upload).
Note that TclCurl will still act as if the request method it would otherwise have used were in effect, and behave according to that. Thus, changing this to HEAD when TclCurl would otherwise do a GET might cause TclCurl to act funny, and similar. To switch to a proper HEAD, use -nobody; to switch to a proper POST, use -post or -postfields, and so on.
To change request to GET, you should use httpget. Change request to POST with post etc.
This option is mandatory for uploading using SCP.
Using PUT with HTTP 1.1 implies the use of a "Expect: 100-continue" header. You can disable this header with -httpheader as usual.
If you use PUT to a HTTP 1.1 server, you can upload data without knowing the size before starting the transfer if you use chunked encoding. You enable this by adding a header like "Transfer-Encoding: chunked" with -httpheader. With HTTP 1.0 or without chunked transfer, you must specify the size.
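An upload sketch; the URL and local file name are illustrative, and the size is passed explicitly since chunked encoding is not used:

```tcl
# Upload a local file with PUT, announcing the size up front.
set curlHandle [curl::init]
$curlHandle configure -url "http://example.com/uploads/data.txt" \
    -upload 1 \
    -infile "data.txt" \
    -infilesize [file size "data.txt"]
catch {$curlHandle perform}
$curlHandle cleanup
```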
NOTE: The file size is not always known prior to download, and for such files this option has no effect even if the file transfer ends up being larger than this given limit. This concerns both FTP and HTTP transfers.
The last modification time of a file is not always known and in such instances this feature will have no effect even if the given time condition would not have been met. getinfo conditionunmet can be used after a transfer to learn if a zero-byte successful "transfer" was due to this condition not matching.
In unix-like systems, this might cause signals to be used unless -nosignal is used.
When reaching the maximum limit, TclCurl closes the oldest connection in the cache to prevent the number of open connections from increasing.
Note: if you have already performed transfers with this curl handle, setting a smaller maxconnects than before may cause open connections to unnecessarily get closed.
Note that if you add this easy handle to a multi handle, this setting is not acknowledged; instead you must configure the multi handle's own maxconnects option.
In unix-like systems, this might cause signals to be used unless -nosignal is set.
With NSS this is the nickname of the certificate you wish to authenticate with.
NOTE: The format "ENG" enables you to load the private key from a crypto engine. In this case -sslkey is used as an identifier passed to the engine. You have to set the crypto engine with -sslengine. The "DER" format key file currently does not work because of a bug in OpenSSL.
You never need a pass phrase to load a certificate but you need one to load your private key.
This option used to be known as -sslkeypasswd and -sslcertpasswd.
NOTE: If the crypto device cannot be loaded, an error will be returned.
NOTE: If the crypto device cannot be set, an error will be returned.
When negotiating an SSL connection, the server sends a certificate indicating its identity. TclCurl verifies whether the certificate is authentic, i.e. that you can trust that the server is who the certificate says it is. This trust is based on a chain of digital signatures, rooted in certification authority (CA) certificates you supply.
TclCurl uses a default bundle of CA certificates that comes with libcurl but you can specify alternate certificates with the -cainfo or the -capath options.
When -sslverifypeer is nonzero, and the verification fails to prove that the certificate is authentic, the connection fails. When the option is zero, the connection succeeds regardless.
Authenticating the certificate is not by itself very useful. You typically want to ensure that the server, as authentically identified by its certificate, is the server you mean to be talking to; use -sslverifyhost to control that.
When built against NSS this is the directory that the NSS certificate database resides in.
This option apparently does not work on Windows due to a limitation in OpenSSL.
This option is OpenSSL-specific and does nothing if libcurl is built to use GnuTLS.
When negotiating an SSL connection, the server sends a certificate indicating its identity.
When -sslverifyhost is set to 2, that certificate must indicate that the server is the server to which you meant to connect, or the connection fails.
TclCurl considers the server the intended one when the Common Name field or a Subject Alternate Name field in the certificate matches the host name in the URL to which you told Curl to connect.
When set to 1, the certificate must contain a Common Name field, but it does not matter what name it says. (This is not ordinarily a useful setting).
When the value is 0, the connection succeeds regardless of the names in the certificate.
The default is 2.
This option controls the identity that the server claims. The server could be lying. To control lying, see sslverifypeer.
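A sketch combining the peer and host verification options described above; the URL and CA bundle file name are illustrative:

```tcl
# Verify both that the certificate is authentic (-sslverifypeer)
# and that it names the host we meant to reach (-sslverifyhost 2).
set curlHandle [curl::init]
$curlHandle configure -url "https://secure.example.com/" \
    -cainfo "ca-bundle.crt" \
    -sslverifypeer 1 \
    -sslverifyhost 2
```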
For OpenSSL and GnuTLS, valid examples of cipher lists include 'RC4-SHA' and 'SHA1+DES'. You will find more details about cipher lists on this URL:
http://www.openssl.org/docs/apps/ciphers.html
For NSS, valid examples of cipher lists include 'rsa_rc4_128_md5' and 'rsa_aes_128_sha'. With NSS you don't add/remove ciphers; if you use this option then all known ciphers are disabled and only those passed in are enabled. You'll find more details about the NSS cipher lists on this URL:
http://directory.fedora.redhat.com/docs/mod_nss.html
The known key types are: "rsa", "rsa1" and "dss", in any other case "unknown" is given.
TclCurl opinion about how they match may be: "match", "mismatch", "missing" or "error".
The procedure must return:
Any other value will cause the connection to be closed.
CURLOPT_FRESH_CONNECT, CURLOPT_FORBID_REUSE, CURLOPT_PRIVATE, CURLOPT_SSL_CTX_FUNCTION, CURLOPT_SSL_CTX_DATA, CURLOPT_CONNECT_ONLY, CURLOPT_OPENSOCKETFUNCTION and CURLOPT_OPENSOCKETDATA.
It must be called with the same curlHandle the curl::init call returned. You can make any number of calls to perform using the same handle. If you intend to transfer more than one file, you are even encouraged to do so: TclCurl will then attempt to re-use the same connection for the following transfers, making the operations faster, less CPU intensive and less demanding on network resources. Just note that you will have to use configure between the invocations to set options for the following perform.
You must never call this procedure simultaneously from two places using the same handle. Let it return first before invoking it another time. If you want parallel transfers, you must use several curl handles.
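Re-using one handle for sequential transfers can be sketched as follows (the URLs are illustrative):

```tcl
# One handle, several performs: TclCurl tries to re-use the
# connection. Reconfigure the options between transfers.
set curlHandle [curl::init]
foreach url {http://example.com/a.html http://example.com/b.html} {
    $curlHandle configure -url $url -file [file tail $url]
    catch {$curlHandle perform}
}
$curlHandle cleanup
```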
The following information can be extracted:
In order for this to work you have to set the -filetime option before the transfer.
NOTE: this option is only available in libcurl built with OpenSSL support.
Re-initializes all options previously set on a specified handle to the default values.
This puts back the handle to the same state as it was in when it was just created with curl::init.
It does not change the following information kept in the handle: live connections, the Session ID cache, the DNS cache, the cookies and shares.
You can also get the getinfo information by using -infooption variable pairs; after the transfer the variable will contain the value that would have been returned by $curlHandle getinfo option.
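A sketch using the getinfo command form; the URL is illustrative and the option names assume TclCurl's lowercase getinfo naming:

```tcl
# -filetime must be set before the transfer for getinfo filetime to work.
set curlHandle [curl::init]
$curlHandle configure -url "http://example.com/" -filetime 1
catch {$curlHandle perform}
puts "Response code: [$curlHandle getinfo responsecode]"
puts "Total time:    [$curlHandle getinfo totaltime]"
puts "File time:     [$curlHandle getinfo filetime]"
$curlHandle cleanup
```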
Applications should use this information to judge whether things are possible, instead of using compile-time checks, since dynamic/DLL libraries can be changed independently of the application.