CMD Architecture

You should start with x64
and fall back from there…

::good for AMD64 or IA64.
set NODE=.\binary\x64\node.exe

::fallback for old machines
if ["x86"] == ["%PROCESSOR_ARCHITECTURE%"] (
  set NODE=.\binary\x32\node.exe
)
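Once the launcher has picked a binary, the selected node.exe can confirm its own build architecture from inside the script; a minimal sanity-check sketch (the `.\binary\…` layout is the one assumed above):

```javascript
// Print which architecture this node.exe was built for ("x64", "ia32", "arm64", ...).
// Useful to verify the launcher picked the right binary.
const arch = process.arch;
console.log("Running a", arch, "build of Node", process.version);
```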

JavaScript Ninja – How To Tell If You’re Running Code From Within A SandBox’ed IFRAME

How do you know if you are running from within an iframe
which has been set to be sandboxed, without allowing script execution?

Essentially, you try, and fail (an error will still be printed to the console),
to modify a local variable, which requires injecting/executing
a script in the local scope (self).

The following code will give you an explicit
yes/no answer about the script-sandbox state you are currently running in.

It is useful if you're running JavaScript code that
originates from a Chrome extension
with "all_frames": true set in its content_scripts.
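A sketch of the try-and-fail probe described above (the function name and flag are made up for illustration; note that a strict page CSP which blocks inline scripts would report the same result as a sandbox):

```javascript
// Probe whether inline-script execution is blocked (a sandboxed iframe without
// allow-scripts, or a strict CSP). Run this from a context that can still execute,
// e.g. an extension content script injected with "all_frames": true.
function isScriptSandboxed() {
  var flag = "__sandbox_probe__";
  var script = document.createElement("script");
  script.textContent = "self['" + flag + "'] = true;"; // runs synchronously if allowed
  document.documentElement.appendChild(script);
  script.remove();
  var blocked = !self[flag];   // flag unset => the injected script never executed
  delete self[flag];
  return blocked;
}
```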


cURL – Latest Chromium Downloader

@echo off
::stuff you can modify.
::                          http|https
set PROTOCOL=http
::                          snapshots|continuous
set BRANCH=snapshots
::                          Android|Arm|Linux|LinuxGit|LinuxGit_x64|Linux_ARM_Cross-Compile|Linux_ChromiumOS|Linux_ChromiumOS_Full|Linux_x64|Mac|MacGit|Win|WinGit|Win_x64|chromium-full-linux-chromeos
set OS=Win_x64
::                          mini_installer.exe|REVISIONS|||||||remoting-host.msi||||changelog.xml|
set FILE=mini_installer.exe

::stuff you should keep as is.
::NOTE: URL_LASTCHANGE/URL_DOWNLOAD must be defined before this point; e.g.
::(bucket layout assumed from the public chromium-browser-snapshots storage):
::set URL_LASTCHANGE=%PROTOCOL%://commondatastorage.googleapis.com/chromium-browser-%BRANCH%/%OS%/LAST_CHANGE
::set URL_DOWNLOAD=%PROTOCOL%://commondatastorage.googleapis.com/chromium-browser-%BRANCH%/%OS%/%VERSION%/%FILE%

set COMMAND_CURL_FORVERSION=curl.exe --silent --http2 --ipv4 --anyauth --insecure --location-trusted --ssl-allow-beast --ssl-no-revoke --url "%URL_LASTCHANGE%"

for /f "tokens=*" %%a in ('call %COMMAND_CURL_FORVERSION% 2^>^&1') do (set VERSION=%%a)
::error handling
if ["%VERSION%"] == [""] ( goto NOVERSION )

echo Got Latest-Version: ^>%VERSION%^< ^[Branch:%BRANCH%/OS:%OS%^]

::you should enable one of your preferred downloaders
::(the labels/goto lines below are a reconstruction; uncomment exactly one goto).
::goto DOWNLOAD_CURL
::goto DOWNLOAD_WGET
::goto DOWNLOAD_ARIA2C
::goto DOWNLOAD_ORBIT

::you should not reach here, unless
::you've forgotten to enable one of the "downloader lines" (above)...
goto NODOWNLOADER

:NOVERSION
  echo ERROR: could not get the latest version...
  goto EXIT

:NODOWNLOADER
  echo ERROR: please enable one of the downloader lines..
  goto EXIT

:DOWNLOAD_CURL
  echo Start Download using cURL...
  call curl.exe --verbose --http2 --ipv4 --ignore-content-length ^
                --anyauth --insecure --location-trusted          ^
                --ssl-allow-beast --ssl-no-revoke --tcp-fastopen ^
                --tcp-nodelay --use-ascii --url "%URL_DOWNLOAD%"
  goto EXIT

:DOWNLOAD_WGET
  echo Start Download using wGET...
  call wget.exe --directory-prefix="." --debug --user-agent="Mozilla/5.0 Chrome" --continue ^
                --server-response --no-check-certificate --secure-protocol=auto  "%URL_DOWNLOAD%"
  goto EXIT

:DOWNLOAD_ARIA2C
  echo Start Download using Aria2C...
  call aria2c.exe --allow-overwrite=true         --auto-file-renaming=false         --check-certificate=false        ^
                  --check-integrity=false        --connect-timeout=120              --console-log-level=notice       ^
                  --continue=true                --dir="."                          --disable-ipv6=true              ^
                  --enable-http-keep-alive=true  --enable-http-pipelining=true      --file-allocation=prealloc       ^
                  --http-auth-challenge=false    --human-readable=true              --max-concurrent-downloads=16    ^
                  --max-connection-per-server=16 --max-tries=3                      --min-split-size=1M              ^
                  --retry-wait=1                 --rpc-secure=false                 --split=8                        ^
                  --timeout=120                  --user-agent="Mozilla/5.0 Chrome"  "%URL_DOWNLOAD%"
  goto EXIT

:DOWNLOAD_ORBIT
  echo Start Download using OrbitDownloader...
  call "C:\Program Files (x86)\Orbitdownloader\orbitdm.exe" "%URL_DOWNLOAD%"
  goto EXIT

:EXIT
  echo Done.

::   - "http"           - the http protocol is faster to connect and download with OrbitDownloader and wGet; it also means your PC won't perform a certificate exchange with the remote machine.
::   - "snapshots"      - snapshots is newer, "continuous" is more stable (but might be very old).
::   - "OS" and "FILE"  - are what you want to download.
:: snapshots    - newest (unstable) newest code-changes - passed unit-tests + compilation.
:: continuous   - old    (stable)                       - passed unit-tests + compilation + test-suites.
:: ------------------------------------------------------------------------------------------------------
::      OS    |  description                  |  version-based build
:: ___________|_______________________________|_____________________________________________________________________________________________
::   Win_x64  |  Chromium Installer (64-bit)  |{version}/mini_installer.exe
::   Win_x64  |  Chromium Package (64-bit)    |{version}/
::   Win      |  Chromium Installer (32-bit)  |{version}/mini_installer.exe
::   Win      |  Chromium Package (32-bit)    |{version}/
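The version-lookup URL the script depends on can be sketched as a tiny Node helper (the `chromium-browser-<branch>` bucket naming is an assumption based on the public snapshots storage, not something the batch file above spells out):

```javascript
// Build the LAST_CHANGE lookup URL from the same PROTOCOL/BRANCH/OS knobs
// the batch script uses. The bucket-naming convention is an assumption.
function lastChangeUrl(protocol, branch, os) {
  return protocol + "://commondatastorage.googleapis.com" +
         "/chromium-browser-" + branch + "/" + os + "/LAST_CHANGE";
}

console.log(lastChangeUrl("http", "snapshots", "Win_x64"));
```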

Download tools..

Solved: Lenovo Power Manager (PWMUI.EXE) Slow-Start Workaround

Owning a Lenovo/IBM computer usually means having to deal with Lenovo's quirky products,
and badly-designed/badly-programmed UI/core components. It's a fact, so don't bitch about it!

The semi-working workaround is to do a "warm-up",
i.e. starting "C:\Program Files (x86)\ThinkPad\Utilities\PWMUI.EXE" (as admin..),
quickly force-closing it using Task Manager, then quickly (it needs to be less than 20 seconds later) restarting the PWMUI.EXE application again; this time it will load up in a few seconds.

The probable cause for the shittiness of Lenovo's power manager (PWMUI.exe) UI is that it was written using a very old .NET framework and one of the shittiest code-libraries known to the human race, called Presentation Foundation. This code swapped so many hands that older versions (2.0/3.0, which PWMUI uses) got broken by newer ones; specifically, the resource-loading management switched from a proactive attitude to an "on-demand/on-the-fly" one that requires getting an event signaling *stuff*. Old code gets stuck in a "blocking mode" and waits (A LOT!) before falling back to the legacy back-end, which loads the resources the way versions 2.0/3.0 used to..
Also, when the application is closed, Presentation Foundation will unload the resources, so you'll need to repeat the workaround each time you want to run the power-manager application.

I’ve reverse-engineered the source of the power-manager application,
which is quite an easy feat, due to the fact it was written with practically minimal code, using the Presentation Foundation template of .NET.
It was probably written in Visual Basic .NET; I’ve converted it to C#, which is nicer ;)
and the source (if you wish to help debug it, or even compile it with a more recent version of .NET, which will surely clear some issues) is available here:


CSS3 :focus-within Is A One Step Closer Towards A Parent-CSS

You might have heard of the newest (well.. to date) CSS pseudo-class,
:focus-within.

You can create a style-effect depending on the focus-state of any of a node's descendants
(yes, this also includes descendants of the shadow trees :]),
so this is essentially a response to the much-needed (but still not fully here… yet…) child-to-parent rule-set.

You can use it in the latest Chromium, Chrome beta (v60+), and pretty much every Mozilla Firefox from version 50+ (Firefox got there before Chrome? whaaaaat?)
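A minimal sketch of the parent-styling effect (the .signup-form class is made up for illustration):

```css
/* Highlight the whole form whenever any field inside it has focus:
   the parent is styled based on a descendant's state. */
.signup-form:focus-within {
  outline: 2px solid cornflowerblue;
}
```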

A Non-Blocking Font-Download – Pure CSS, No JavaScript!

Custom font-faces are fun,
..I guess…

But when you don't allow several alternative font-families (sans-serif, serif, etc. are considered "low-priority" but perfectly fine alternatives!), don't use the local(…) syntax to allow shorter load times by using a locally-installed font from the OS, and pretty much limit the page to an external font-file,
the time the page takes until the first-paint event might be a few SECONDS(!)

There are a few tricks that the browser will try to apply,
just to avoid all this waiting, though..

You can avoid that "blocking" time
by explicitly allowing the browser to display the page as is,
and "swap" (the official term..), which means update the page
once the external font-file has been downloaded.

You can do this (for web-developers) by adding font-display:swap; to your @font-face loading block part in your page’s CSS file.

@font-face {
  font-family: ExampleFont;
  src: url(/path/to/fonts/examplefont.woff) format('woff'),
       url(/path/to/fonts/examplefont.eot) format('embedded-opentype');
  font-weight: 400;
  font-style: normal;
  font-display: swap;
}

And if you’re already *there*, you might be able
to tweak things to load the page much faster,
for example when using a common font-family.

<style type="text/css" media="all">
@font-face {
  font-family:  'Tahoma';
  font-weight:  400;
  font-style:   normal;
  font-display: swap;
  src: local('Tahoma')
      ,url(fonts/tahoma.eot?#iefix)   format('embedded-opentype')
      ,url(fonts/tahoma.woff)         format('woff')
      ,url(fonts/tahoma.ttf)          format('truetype');
}

/* normalise inheritance of font-family and stuff (for textarea and such..) */
textarea, input, select, button {
  font:                   inherit;
  font-family:            inherit;
  font-style:             inherit;
  font-size:              inherit;
  font-weight:            inherit;

  font-display:           inherit;
  font-size-adjust:       inherit;
  font-stretch:           inherit;
}

/* the "actual magic" */
body {
  font-family: 'Tahoma', sans-serif;
}

/* are we having fun yet?? */
</style>

Chrome With WebM Version 9 (VP9)

The WebM project is closely integrated with Chromium,
and finally (Chrome beta v60+) you can properly display WebM that is encoded with VP9.

VP9 supports the full range of web and mobile use cases from low bitrate compression to high-quality ultra-HD, with additional support for 10/12-bit encoding and HDR.

VP9 can reduce video bit rates by as much as 50% compared with other known codecs. It is supported for adaptive streaming and is used by YouTube as well as other leading web video providers.

VP9 decoding is supported on over 2 billion end points including Chrome, Opera, Edge, Firefox and Android devices, as well as millions of smart TVs.

To encode, simply grab any media-file and the latest FFmpeg,
and read through this helpful page:

Here is an example (for Windows):

ffmpeg.exe  -i "input.mp4" -c:v "libvpx-vp9" -b:v 2M -c:a "libopus" -pass 1 -f webm   nul           &&  ^
ffmpeg.exe  -i "input.mp4" -c:v "libvpx-vp9" -b:v 2M -c:a "libopus" -pass 2           "output.webm"

Here are some more advanced encoding options, using ffmpeg,
from the Google-maintained documentation for WebM (for content-creators):
or (older) from here:
And you can make things even easier
by using (with the VP9 modification above) the code from this article:
iCompile – FFmpeg – Everything To WebM – Drag&Drop!.

When should you use the new encoding?
You are encouraged to provide a VP8 alternative, just for good measure,
but also an additional, properly-encoded VP9 video-source,
for example:

<video poster="movie.jpg" controls>
  <source src="movie.webm" type='video/webm;codecs="vp8,vorbis"'>
  <source src="movie.webm" type='video/webm;codecs="vp09.00.10.08,opus"'>

  <source src="movie.ogv" type='video/ogg;codecs="theora,vorbis"'>
  <source src="movie.mp4" type='video/mp4;codecs="avc1.4D401E,mp4a.40.2"'>
  <p>Fallback message. No support for HTML5 video tag.</p>
</video>

You can be more specific about the codecs-string,
but that is totally up to you..
Here are some additional examples; btw, did you know you can test the support using the JS MediaSource class? ha!
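That MediaSource check can be sketched like this (the codec string matches the &lt;source&gt; example above; the function name is made up):

```javascript
// Feature-test VP9-in-WebM playback support via the browser's MediaSource API.
// Returns false outside a browser (e.g. in plain Node), where MediaSource is undefined.
function supportsVp9() {
  var vp9Type = 'video/webm;codecs="vp09.00.10.08,opus"';
  return typeof MediaSource !== "undefined" &&
         MediaSource.isTypeSupported(vp9Type);
}

console.log("VP9 supported:", supportsVp9());
```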

Chromium: Snapshot vs. Continuous

OK, once and for all N00bs ;]

…every time there are any substantial code-changes (about twice a day),
an automated build of Chromium is created.

Once the compilation process succeeds (minimal unit-tests, mostly..),
the build is copied to the snapshot tree (which you may freely download from here: but keep in mind, it is quite buggy..)

The build is then run through a quite huge (Python-managed) test-suite…


NodeJS Master – Simple Request

Here is how to do a simple request,
with no dependencies/external modules required.

The example shows how to do a HEAD request
and get all the HTTP-headers. You can easily modify it to use GET or POST.

The only thing you need is node.exe (about 18MB), without any external stuff,
which you can download here:

function head(url, onheaders, ondone, onerror){
  onheaders  = "function" === typeof onheaders  ? onheaders : function(){}; //normalise
  ondone     = "function" === typeof ondone     ? ondone    : function(){}; //normalise
  onerror    = "function" === typeof onerror    ? onerror   : function(){}; //normalise

  var data = [];

  url = require('url').parse(url);
  const request = require("http").request({
                                     protocol: url.protocol               // "http:"
                                    ,auth:     url.auth                   // "username:password"
                                    ,hostname: url.hostname               // ""
                                    ,port:     url.port                   // 80
                                    ,path:     url.path                   // "/"
                                    ,family:   4                          // IPv4
                                    ,method:   "HEAD"
                                    ,headers:  {"Connection":     "Close"
                                               ,"Cache-Control":  "no-cache"
                                               ,"User-Agent":     "Mozilla/5.0 Chrome"
                                               ,"Referer":        ""
                                               ,"DNT":            "1"
                                               ,"X-Hello":        "Goodbye"
                                               ,"Accept":         "*/*"
                                               }
                                    ,agent:    undefined                  //use http.globalAgent for this host and port.
                                    ,timeout:  5 * 1000                   //5 seconds
                                    });
  request.setSocketKeepAlive(false);                                      //make sure to return right away (single connection mode).
  request.on("error",    (err)      => {  onerror(request, err);                     });
  request.on("response", (response) => {  onheaders(request, response);              //headers
                                          response.setEncoding("utf8");              //collect response-body
                                          response.on("data", (chunk) => { data.push(chunk);                         });
                                          response.on("end",  ()      => { ondone(request, response, data.join("")); });
                                       });
  request.end();                                                          //send the request
}

head("" ,function whenheaders(request, response){
                                 console.log("HTTP-Headers Received",  JSON.stringify(response.headers)  );
                              }
                              ,function whendone(request, response, data){      //NOTE: won't be called when using HEAD
                                 console.log("Response Body Received", data);
                              }
                              ,function whenerror(request, error){
                                 console.log("Request Error", error);
                              });

You can download the full example from github too:

Parsing URL – Scheme

┌─────────────────────────────────────────────────────────────────────────────────────────────┐
│                                            href                                             │
├──────────┬──┬─────────────────────┬─────────────────────┬───────────────────────────┬───────┤
│ protocol │  │        auth         │        host         │           path            │ hash  │
│          │  │                     ├──────────────┬──────┼──────────┬────────────────┤       │
│          │  │                     │   hostname   │ port │ pathname │     search     │       │
│          │  │                     │              │      │          ├─┬──────────────┤       │
│          │  │                     │              │      │          │ │    query     │       │
"  https:   //    user   :   pass   @ sub.host.com : 8080   /p/a/t/h  ?  query=string   #hash "
│          │  │          │          │   hostname   │ port │          │                │       │
│          │  │          │          ├──────────────┴──────┤          │                │       │
│ protocol │  │ username │ password │        host         │          │                │       │
├──────────┴──┼──────────┴──────────┼─────────────────────┤          │                │       │
│   origin    │                     │       origin        │ pathname │     search     │ hash  │
├─────────────┴─────────────────────┴─────────────────────┴──────────┴────────────────┴───────┤
│                                            href                                             │
└─────────────────────────────────────────────────────────────────────────────────────────────┘
(All spaces in the "url" line should be ignored. They are purely for formatting.)

JavaScript Ninja – Unicode – String.fromCharCode Fix To Allow More-Than-2-Bytes Unicode Chars Directly (A.K.A. Unicode-Surrogate-Pair)

String.from_char_code__with_unicode_surrogate_pair_support = function(char_code){ "use strict";
  var surrogate1, surrogate2;

  if(char_code <= 0xFFFF) return String.fromCharCode(char_code);

  char_code  = char_code - 0x10000;
  surrogate1 = 0xD800 + (char_code >> 10);
  surrogate2 = 0xDC00 + (char_code &  0x3FF);
  return String.fromCharCode(surrogate1, surrogate2);
};

A seamless solution is to silently override (one time..)
the fromCharCode method.

/* the magic of scope :] */
(function(){ "use strict";
  var original_reference = String.fromCharCode;
  function fromCharCodeFixed(code){
    if(code <= 0xFFFF) return original_reference(code);
    code = code - 0x10000;
    return original_reference( 0xD800 + (code >> 10)
                              ,0xDC00 + (code &  0x3FF)
                              );
  }

  String.fromCharCode = fromCharCodeFixed;                        //override
  String.fromCharCode.original_reference = original_reference;    //allow access to original reference (optional)
})();

Do it only once; although it can not really harm anything
other than creating a recursive loop onto itself :]
You can check for the existence of original_reference to make sure not to run the block again… :]]
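The surrogate-pair math above can be verified against the built-in String.fromCodePoint (the sample code point is just an illustration):

```javascript
// Verify the surrogate-pair arithmetic against String.fromCodePoint.
var cp = 0x1F600;                              // an astral code point (> 0xFFFF)
var hi = 0xD800 + ((cp - 0x10000) >> 10);      // high surrogate: 0xD83D
var lo = 0xDC00 + ((cp - 0x10000) & 0x3FF);    // low  surrogate: 0xDE00
console.log(String.fromCharCode(hi, lo) === String.fromCodePoint(cp));  // true
```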

AutoIt SubDomains

Using, and sniffing on a local machine with it installed.