The Wiert Corner – irregular stream of stuff

Jeroen W. Pluimers on .NET, C#, Delphi, databases, and personal interests


Archive for the ‘cURL’ Category

Convert cURL command syntax to Python requests, Node.js code

Posted by jpluimers on 2019/07/26

Utility for converting curl commands to code

For my link archive: [WayBack] Convert cURL command syntax to Python requests, Node.js code
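As a rough illustration of what such a converter emits, here is a hand-written sketch (the sample curl command and its Python requests translation are my own example, not output copied from the tool):

# curl -X POST -H "Content-Type: application/json" -d '{"name": "test"}' https://httpbin.org/post
# translates to roughly:
import requests

response = requests.post(
    "https://httpbin.org/post",
    headers={"Content-Type": "application/json"},
    data='{"name": "test"}',
)
print(response.status_code)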

–jeroen

Posted in *nix, *nix-tools, cURL, Development, JavaScript/ECMAScript, Node.js, Power User, Python, Scripting, Software Development | Leave a Comment »

Transferring files from a Linux console: transfer.sh and anypaste.xyz

Posted by jpluimers on 2019/07/26

transfer.sh

anypaste.xyz
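transfer.sh's classic console usage is curl --upload-file ./hello.txt https://transfer.sh/hello.txt; below is a rough Python equivalent, assuming the service still accepts plain HTTP PUT uploads and returns the download URL as the response body:

import requests

# transfer.sh historically accepted a plain HTTP PUT and returned the
# shareable download URL as the response body (an assumption on my part).
with open("hello.txt", "rb") as f:
    response = requests.put("https://transfer.sh/hello.txt", data=f)
print(response.text)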

–jeroen

via: [WayBack] Interesting: Anypaste – Share And Upload Files To Compatible Hosting Sites Automatically… – DoorToDoorGeek “Stephen McLaughlin” – Google+

Posted in *nix, *nix-tools, bash, cURL, Power User | Leave a Comment »

wget and curl: downloads that sometimes fail

Posted by jpluimers on 2018/10/19

For my archive: somewhere between cURL 7.21.0 and 7.34.0, curl stopped liking to be started from an RDP-based tsclient share:

C:\Users\jeroen\Downloads>\\tsclient\bin\curl.7.21.0.exe --remote-name https://www.xs4all.nl/index.html
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 86465    0 86465    0     0  60805      0 --:--:--  0:00:01 --:--:-- 70012

C:\Users\jeroen\Downloads>\\tsclient\bin\curl.7.34.0.exe --remote-name https://www.xs4all.nl/index.html
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (6) Could not resolve host: web.archive.org

C:\Users\jeroen\Downloads>\\tsclient\bin\curl.7.61.0.exe --remote-name https://www.xs4all.nl/index.html
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (6) Could not resolve host: www.xs4all.nl

C:\Users\jeroen\Downloads>copy \\tsclient\bin\curl.7.61.0.exe
        1 file(s) copied.

C:\Users\jeroen\Downloads>curl.7.61.0.exe --remote-name https://www.xs4all.nl/index.html
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    13    0    13    0     0     10      0 --:--:--  0:00:01 --:--:--    10

It fails the same way after net use B: \\tsclient\bin, so mapping the share to a drive letter does not matter.

The best link I could find until I got to the real problem was [WayBack] curl: (6) Could not resolve host: application – Stack Overflow, which covers a different problem: proper quoting.

In addition to --remote-name, you can also grab the file name from the headers using --remote-header-name, and --remote-time uses the remote file time. --location follows 302 redirects. You can see that in the example below, which I built based on

[WayBack] unix – Curl to grab remote filename after following location – Stack Overflow: The remote side sends the filename using the Content-Disposition header. curl 7.21.2 or newer does this automatically if you specify --remote-header-name / -J: curl -O -J -L $url

C:\Users\jeroen\Downloads>b:\curl.7.21.0.exe --location --remote-name --remote-time --remote-header-name "https://web.archive.org/web/20180712073755if_/https://www.danielwolf.eu/?wpdmdl=1965"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 86465    0 86465    0     0  45748      0 --:--:--  0:00:01 --:--:-- 50772
curl: Saved to filename 'pkgWuppdiWP_DX102T_1-1-2.zip'
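For comparison, a rough Python requests sketch of the same behaviour: follow redirects like --location, then take the filename from the Content-Disposition header like --remote-header-name. The naive header parsing is my simplification, not what curl does internally:

import re
import requests

url = ("https://web.archive.org/web/20180712073755if_/"
       "https://www.danielwolf.eu/?wpdmdl=1965")

response = requests.get(url, allow_redirects=True)  # like --location
disposition = response.headers.get("Content-Disposition", "")
match = re.search(r'filename="?([^";]+)"?', disposition)  # naive parse
filename = match.group(1) if match else "download.bin"

with open(filename, "wb") as f:
    f.write(response.content)
print("Saved to filename", repr(filename))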

wget failed big time:

C:\Users\jeroen\Downloads>B:\wget.exe --no-check-certificate -v -v -v --content-disposition --restrict-file-names=windows "https://web.archive.org/web/20180712073755if_/https://www.danielwolf.eu/?wpdmdl=1965"
wget: Cannot read b:/.wgetrc (No such file or directory).
--2018-07-12 09:55:23--  https://web.archive.org/web/20180712073755if_/https://www.danielwolf.eu/?wpdmdl=1965
Resolving web.archive.org... 207.241.225.186
Connecting to web.archive.org|207.241.225.186|:443... failed: Invalid argument.
Retrying.

...

--2018-07-12 09:55:23--  (try:20)  https://web.archive.org/web/20180712073755if_/https://www.danielwolf.eu/?wpdmdl=1965
Connecting to web.archive.org|207.241.225.186|:443... failed: Invalid argument.
Giving up.

This is not caused by the filename (Windows does not like the ? question mark in output file names, so – as with the & ampersand in file URLs – you have to quote the full URL, but also provide the --restrict-file-names=windows parameter; see [WayBack] wget – I can’t download files with “?” – Super User).
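For scripts that build their own output names, here is a small Python sketch that replaces the characters Windows forbids (wget's --restrict-file-names=windows instead percent-escapes them; the replacement character is my choice):

import re

def windows_safe_filename(name):
    # Replace the characters Windows forbids in file names: < > : " / \ | ? *
    return re.sub(r'[<>:"/\\|?*]', "_", name)

print(windows_safe_filename("index.html?wpdmdl=1965"))  # index.html_wpdmdl=1965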

–jeroen

Posted in *nix, *nix-tools, cURL, Power User, wget | Leave a Comment »

https://altd.embarcadero.com/ TLS certificate does not match domain name

Posted by jpluimers on 2018/09/07

One of the domains not yet monitored at embarcaderomonitoring.wiert.me was the altd download server for ISOs and installers, at both the http and https level. Ultimately you want https: most of these downloads are installers, so you do not want any man-in-the-middle to fiddle with them.

TLS on altd fails

Uptimerobot is not yet smart enough to check the validity of TLS certificates on https connections.

Chrome, Firefox, Safari, Internet Explorer, wget, curl and ssllabs however are.
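A minimal Python sketch of such a check, using only the standard library (the hostname is from this post; the rest is my assumption of how you could script it):

import socket
import ssl

def tls_cert_matches(hostname, port=443):
    # The default context verifies both the certificate chain and that
    # the certificate actually matches the hostname.
    context = ssl.create_default_context()
    try:
        with socket.create_connection((hostname, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=hostname):
                return True
    except ssl.SSLCertVerificationError:
        return False

print(tls_cert_matches("altd.embarcadero.com"))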

altd hides as much of itself as possible

Uptimerobot did not like monitoring the plain http://altd.embarcadero.com/ and https://altd.embarcadero.com/ URLs: altd is not browsable and tries to hide most of its structure from access. This means both URLs return an odd response:

Those responses are actually 404 errors (note the - minus sign after curl --trace-ascii: it sends the trace to stdout):

$ wget http://altd.embarcadero.com/
--2018-09-05 10:44:23-- http://altd.embarcadero.com/
Resolving altd.embarcadero.com (altd.embarcadero.com)... 88.221.144.40, 88.221.144.10
Connecting to altd.embarcadero.com (altd.embarcadero.com)|88.221.144.40|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2018-09-05 10:44:23 ERROR 404: Not Found.

$ curl --verbose http://altd.embarcadero.com/
*   Trying 88.221.144.40...
* TCP_NODELAY set
* Connected to altd.embarcadero.com (88.221.144.40) port 80 (#0)
> GET / HTTP/1.1
> Host: altd.embarcadero.com
> User-Agent: curl/7.54.0
> Accept: */*
> 
< HTTP/1.1 404 Not Found
< Server: Apache
< Content-Type: text/html; charset=iso-8859-1
< Content-Length: 16
< Date: Wed, 05 Sep 2018 08:45:57 GMT
< Connection: keep-alive
< 
* Connection #0 to host altd.embarcadero.com left intact
File not found."

$ curl --trace-ascii - http://altd.embarcadero.com/
== Info:   Trying 88.221.144.40...
== Info: TCP_NODELAY set
== Info: Connected to altd.embarcadero.com (88.221.144.40) port 80 (#0)
=> Send header, 84 bytes (0x54)
0000: GET / HTTP/1.1
0010: Host: altd.embarcadero.com
002c: User-Agent: curl/7.54.0
0045: Accept: */*
0052: 
<= Recv header, 24 bytes (0x18)
0000: HTTP/1.1 404 Not Found
<= Recv header, 16 bytes (0x10)
0000: Server: Apache
<= Recv header, 45 bytes (0x2d)
0000: Content-Type: text/html; charset=iso-8859-1
<= Recv header, 20 bytes (0x14)
0000: Content-Length: 16
<= Recv header, 37 bytes (0x25)
0000: Date: Wed, 05 Sep 2018 08:47:19 GMT
<= Recv header, 24 bytes (0x18)
0000: Connection: keep-alive
<= Recv header, 2 bytes (0x2)
0000: 
<= Recv data, 16 bytes (0x10)
0000: File not found."
File not found."== Info: Connection #0 to host altd.embarcadero.com left intact

This is also the reason that WayBack does not want to archive that link, but it can be archived at [Archive.is] https://altd.embarcadero.com/.

Luckily, a Google search for site:altd.embarcadero.com revealed there is a non-installer file short enough (~72 kibibytes) for Uptimerobot to check, so it now verifies it can access these:

–jeroen

Read the rest of this entry »

Posted in *nix, *nix-tools, cURL, Encryption, HTTPS/TLS security, Monitoring, Power User, Security, Uptimerobot, wget | Leave a Comment »

How to tell if your site is served via CloudFlare | Igor’s Blog

Posted by jpluimers on 2018/08/10

Based on [Archive.is] How to tell if your site is served via CloudFlare | Igor’s Blog, I’ve changed the script a little bit.

I’ve tested it with one of the domains from the Cloudbleed list (a pretty good indication that the site uses Cloudflare) and with example.org, which does not:

# curl -sI https://feedly.com | grep "Server\|__cfduid\|CF-RAY"
Set-Cookie: __cfduid=d779ee6e244349cf06e2707771a9185e21492589239; expires=Thu, 19-Apr-18 08:07:19 GMT; path=/; domain=.feedly.com; HttpOnly
Server: cloudflare-nginx
CF-RAY: 351e5e9af8971497-AMS
# curl -sI https://example.org | grep "Server\|__cfduid\|CF-RAY"
Server: ECS (ewr/15BD)
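A rough Python equivalent of that grep (the header and cookie names come from the post; the function itself is my sketch):

import requests

def served_via_cloudflare(url):
    # Look for the three Cloudflare hints the script greps for:
    # the Server header, the __cfduid cookie, and the CF-RAY header.
    response = requests.head(url, allow_redirects=True, timeout=10)
    server = response.headers.get("Server", "").lower()
    cookies = response.headers.get("Set-Cookie", "").lower()
    return ("cloudflare" in server
            or "__cfduid" in cookies
            or "CF-RAY" in response.headers)

print(served_via_cloudflare("https://feedly.com"))   # True in the post
print(served_via_cloudflare("https://example.org"))  # False in the post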

Domain Source: pirate/sites-using-cloudflare: Archived list of domains using Cloudflare DNS at the time of the CloudBleed announcement

–jeroen

via: [WayBack] https://www.igorkromin.net/index.php/2017/04/18/how-to-tell-if-your-site-is-served-via-cloudflare/ – Joe C. Hecht – Google+


Posted in *nix, *nix-tools, cURL, Power User | Leave a Comment »
