Erreur32 Video - Bank money laundering? [Gaël Giraud] 2020-10-12T16:02:00+02:00 2020-10-12T16:02:00+02:00

Economic chaos, bank money laundering?

With [Gaël Giraud]













Admin tools 2020-05-25T22:39:52+02:00 2020-05-25T22:39:52+02:00

Admin Tools

Code is life 2020-05-25T17:31:40+02:00 2020-05-25T17:31:40+02:00

fail2web 2020-05-23T00:23:23+02:00 2020-05-23T00:23:23+02:00

Monitoring your fail2ban ban web GUI.

GitHub sources:

While in theory it would be nice to have every network service behind a LAN or certificate authentication, that isn't always possible. Many protocols don't have certificate authentication, and even if they do, maybe your client doesn't support it. So you are left with password authentication, but what happens when someone uses a 4-char password... and an attacker is able to guess 1000 times every minute? Seems like a complicated problem, but not with fail2ban! The following description is lifted from the fail2ban documentation.

However, once I installed fail2ban I found myself locking myself out of my servers all the time! I would set up a SIP client to use the wrong password, and would spend way too much time debugging what was wrong! With fail2web, that is all a thing of the past! fail2web is a mobile-first GUI to fail2ban that allows you to view who is currently banned, test regexes and view graphs of past bans.

You need to have fail2rest and the necessary libraries installed.

Even though this tutorial is written for Ubuntu/Debian, it should work on any host that has Golang and fail2ban (all Unixes). You might have to deviate from this tutorial, but it should cover most of it.

First we need to install fail2rest, the daemon that communicates with fail2ban. The backend requires the Go programming language, and git to download it. If you have never used Go before you can follow this verbatim; adjust as needed if you already have a GOPATH set.

  • sudo apt-get install golang git gcc

  • go get

  • cd $GOPATH/src/

  • sudo -E go run *.go

Check the Debian startup script below.

If everything worked this program should just run forever! We will update it to run as a service later, but make sure it is working first. Run wget -qO- -- "localhost:5000/global/ping"; if that returns "pong", you have a running fail2rest instance!

Next we are going to install fail2web in /var/www/fail2web; later we will access it via Apache.

  • git clone --depth=1 /var/www/fail2web

Congrats, you are almost done! You now have all the moving parts, all that is left is to serve it via Apache

Install Apache and put it behind a password
  • sudo apt-get install apache2 apache2-utils

  • sudo htpasswd -c /var/www/htpasswd YOUR_USERNAME

  • sudo a2enmod proxy proxy_ajp proxy_http rewrite deflate headers proxy_balancer proxy_connect proxy_html
Create your fail2web config

Then, with your text editor of choice, create /etc/apache2/sites-enabled/fail2web.conf with the following content. Make sure to replace the ServerName placeholder.

     <VirtualHost *:80>
  ServerName ##CHANGE THIS
  DocumentRoot /var/www/fail2web/web

  <Location />
      AuthType Basic
      AuthName "Restricted"
      AuthBasicProvider file
      AuthUserFile /var/www/htpasswd
      Require valid-user
  </Location>

  ProxyPass /api http://localhost:5000
</VirtualHost>

Restart Apache: sudo service apache2 restart

fail2web should now be accessible via the ServerName you chose above.

#!/bin/sh
### BEGIN INIT INFO
# Provides:          fail2rest
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Should-Start:      fail2ban
# Should-Stop:       fail2ban
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: fail2rest initscript
# Description:       fail2rest is a small
#                    REST server that aims
#                    to allow full administration
#                    of a fail2ban server via HTTP
### END INIT INFO

# Author: Sean DuBois <>

NAME="fail2rest"
DESC="fail2ban REST server"
#FIXME path to your fail2rest binary, config and working directory
DAEMON="$GOPATH/bin/fail2rest"
CONFIG="/etc/fail2rest.json"
WORKDIR="/var/run/fail2rest"

case "$1" in
    start)
        echo "Starting $NAME ..."
        if [ -f "$WORKDIR/$NAME.pid" ]; then
            echo "Already running according to $WORKDIR/$NAME.pid"
            exit 1
        fi
        cd "$WORKDIR"
        export GOPATH="$GOPATH"
        export PATH="/usr/sbin:/usr/bin:/sbin:/bin:$GOPATH/bin"
        /bin/su -m -l root -c "$DAEMON --config $CONFIG" > "$WORKDIR/$NAME.log" 2>&1 &
        PID=$!
        echo $PID > "$WORKDIR/$NAME.pid"
        echo "Started with pid $PID - Logging to $WORKDIR/$NAME.log" && exit 0
        ;;
    stop)
        echo "Stopping $NAME ..."
        if [ ! -f "$WORKDIR/$NAME.pid" ]; then
            echo "Already stopped!"
            exit 1
        fi
        PID=`cat "$WORKDIR/$NAME.pid"`
        kill $PID
        rm -f "$WORKDIR/$NAME.pid"
        echo "stopped $NAME" && exit 0
        ;;
    restart)
        $0 stop
        sleep 1
        $0 start
        ;;
    status)
        if [ -f "$WORKDIR/$NAME.pid" ]; then
            PID=`cat "$WORKDIR/$NAME.pid"`
            if [ "$(/bin/ps --no-headers -p $PID)" ]; then
                echo "$NAME is running (pid : $PID)" && exit 0
            else
                echo "Pid $PID found in $WORKDIR/$NAME.pid, but not running." && exit 1
            fi
        else
            echo "$NAME is NOT running" && exit 1
        fi
        ;;
    *)
        echo "Usage: /etc/init.d/$NAME {start|stop|restart|status}" && exit 1
        ;;
esac

exit 0
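On systemd-based distributions you could use a unit file instead of the SysV script above; a minimal sketch (the binary and config paths are assumptions, adjust to your install):

```
# /etc/systemd/system/fail2rest.service : illustrative, not shipped by upstream
[Unit]
Description=fail2ban REST server
After=network.target fail2ban.service

[Service]
# assumed locations; point these at your fail2rest binary and config
ExecStart=/root/go/bin/fail2rest --config /etc/fail2rest.json
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with sudo systemctl enable --now fail2rest.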
Geoip Update Database 2020-05-18T20:23:00+02:00 2020-05-18T20:23:00+02:00

How to install the “geoipupdate” software from MaxMind.

Create an account on Maxmind for GeoLite2. Go to:

Download and extract the appropriate tarball for your system.

You will end up with a directory named something like geoipupdate_4.0.0_linux_amd64 depending on the version and architecture.

Copy geoipupdate to where you want it to live. To install it to /usr/local/bin/geoipupdate, run the equivalent of sudo cp geoipupdate_4.0.0_linux_amd64/geoipupdate /usr/local/bin.

geoipupdate looks for the config file /usr/local/etc/GeoIP.conf by default.

add-apt-repository ppa:maxmind/ppa
apt update ;  apt upgrade ;  apt install geoipupdate

Before you can run the geoipupdate script, you'll have to add your own credentials in /etc/GeoIP.conf

You should already have a file called GeoIP.conf; if not, you can copy-paste this:

 $ nano /etc/GeoIP.conf
# Please see for instructions
# on setting up geoipupdate, including information on how to download a
# pre-filled GeoIP.conf file.

# Enter your account ID and license key below. These are available from
# If you are only using free
# GeoLite databases, do not uncomment these lines.
 AccountID 0
 LicenseKey 000000000000
 EditionIDs GeoLite2-ASN GeoLite2-City GeoLite2-Country

More help here :

Run the command:

$ geoipupdate -v
 geoipupdate 3.1.1
 Opened License file /etc/GeoIP.conf
 AccountID 154662
 LicenseKey g6oO...
 Insert edition_id GeoLite2-ASN
 Insert edition_id GeoLite2-City
 Insert edition_id GeoLite2-Country  
 Read in license key /etc/GeoIP.conf
 Number of edition IDs 3
 No new updates available
 No new updates available
 No new updates available

(You need to fill in AccountID, LicenseKey and EditionIDs for the script to run successfully.)

We are finally ready to update our MaxMind GeoLite database!

If that command succeeds, you can now execute the geoipupdate script below.

   mkdir /var/www/app/cache/ip_data
   sudo -u daemon geoipupdate -f /etc/GeoIP.conf -d /var/www/app/cache/ip_data -v
   chmod -R 777 /var/www/app/cache/ip_data

Because we added the -v option, the command prints some details about the process and the results.
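To keep the database current you can schedule geoipupdate from cron; a sketch (the schedule and paths are assumptions, MaxMind refreshes GeoLite2 weekly):

```
# /etc/cron.d/geoipupdate : illustrative schedule, adjust paths to your setup
30 3 * * 3,6 root /usr/local/bin/geoipupdate -f /etc/GeoIP.conf -d /var/www/app/cache/ip_data
```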

HSTR 2020-05-13T21:57:00+02:00 2020-05-13T21:57:00+02:00

Easily view, navigate and search your command history with shell history suggest box for bash and zsh.

Install HSTR from the PPA. Add the PPA, trust its GPG key and install HSTR:


Are you looking for a command that you used recently? Do you want to avoid the need to write long commands over and over again? Are you looking for a tool that is able to manage your favorite commands?

HSTR (HiSToRy) is a command line utility that brings improved bash/zsh command completion from the history. It aims to make completion easier and more efficient than Ctrl-r.

HSTR can also manage your command history (for instance you can remove commands that are obsolete or contain a piece of sensitive information) or bookmark your favorite commands.

Add the PPA to your APT sources (using tee, because a plain sudo echo ... >> redirection would not be executed as root):

echo -e "\ndeb stretch main" | sudo tee -a /etc/apt/sources.list

import PPA's GPG key

wget -qO - | sudo apt-key add -

update sources

sudo apt update

Install HSTR

sudo apt install hstr


Configure HSTR just by running, for bash:


hstr --show-configuration >> ~/.bashrc

or, for zsh:

hstr --show-configuration >> ~/.zshrc

For detailed HSTR configuration documentation please refer to Configuration.

Nano Shortcut 2020-05-11T15:43:00+02:00 2020-05-11T15:43:00+02:00

Nano's shortcuts

The editor's keystrokes and their functions

File handling

Ctrl+S   Save current file
Ctrl+O   Offer to write file ("Save as")
Ctrl+R   Insert a file into current one
Ctrl+X   Close buffer, exit from nano


Editing

Ctrl+K   Cut current line into cutbuffer
Alt+6    Copy current line into cutbuffer
Ctrl+U   Paste contents of cutbuffer
Alt+T    Cut until end of buffer
Ctrl+]   Complete current word
Alt+3    Comment/uncomment line/region
Alt+U    Undo last action
Alt+E    Redo last undone action

Search and replace

Ctrl+Q  Start backward search
Ctrl+W  Start forward search
Alt+Q   Find next occurrence backward
Alt+W   Find next occurrence forward
Alt+R   Start a replacing session


Deletion

Ctrl+H          Delete character before cursor
Ctrl+D          Delete character under cursor
Ctrl+Shift+Del  Delete word to the left
Ctrl+Del        Delete word to the right
Alt+Del         Delete current line


Operations

Ctrl+T   Run a spell check
Ctrl+J   Justify paragraph or region
Alt+J    Justify entire buffer
Alt+B    Run a syntax check
Alt+F    Run a formatter/fixer/arranger
Alt+:    Start/stop recording of macro
Alt+;    Replay macro

Moving around

Ctrl+B     One character backward
Ctrl+F     One character forward
Ctrl+←   One word backward
Ctrl+→   One word forward
Ctrl+A   To start of line
Ctrl+E   To end of line
Ctrl+P   One line up
Ctrl+N   One line down
Ctrl+↑   To previous block
Ctrl+↓   To next block
Ctrl+Y   One page up
Ctrl+V   One page down
Alt+\    To top of buffer
Alt+/    To end of buffer

Special movement

Alt+G   Go to specified line
Alt+]   Go to complementary bracket
Alt+↑   Scroll viewport up
Alt+↓   Scroll viewport down
Alt+<   Switch to preceding buffer
Alt+>   Switch to succeeding buffer


Information

Ctrl+C  Report cursor position
Alt+D   Report word/line/char count
Ctrl+G  Display help text


Various

Alt+A        Turn the mark on/off
Tab          Indent marked region
Shift+Tab    Unindent marked region
Alt+N        Turn line numbers on/off
Alt+P        Turn visible whitespace on/off
Alt+V        Enter next keystroke verbatim
Ctrl+L       Refresh the screen
Ctrl+Z       Suspend nano
vnstat 2020-05-02T15:55:00+02:00 2020-05-02T15:55:00+02:00


Console traffic monitor and graphical image (.png) generator.

Installing vnstat

There are two variants of vnstat: a console-only mode and a mode with image generation.

For plain vnstat:

$ apt-get install vnstat 

And vnstati to generate the images:

$ apt-get install vnstati 

Generating the images

To generate the images, simply add a crontab entry for each of them:

$  crontab -e

5 * * * *     vnstati -s -i eth0 -o PATH/vnstat/summary.png >/dev/null 2>&1
2 * * * *     vnstati -h -i eth0 -o PATH/vnstat/hourly.png  >/dev/null 2>&1
2 * * * *     vnstati -d -i eth0 -o PATH/vnstat/daily.png   >/dev/null 2>&1
2 * * * *     vnstati -t -i eth0 -o PATH/vnstat/top10.png   >/dev/null 2>&1
2 * * * *     vnstati -m -i eth0 -o PATH/vnstat/monthly.png >/dev/null 2>&1
48 * * * *    /usr/bin/vnstat -u >/dev/null 2>&1

Console mode

$ vnstat 1.18 by Teemu Toivola <tst at iki dot fi>

         -q,  --query          query database
         -h,  --hours          show hours
         -d,  --days           show days
         -m,  --months         show months
         -w,  --weeks          show weeks
         -t,  --top10          show top 10 days
         -s,  --short          use short output
         -u,  --update         update database
         -i,  --iface          select interface (default: eth0)
         -?,  --help           short help
         -v,  --version        show version
         -tr, --traffic        calculate traffic
         -ru, --rateunit       swap configured rate unit
         -l,  --live           show transfer rate in real time

See also "--longhelp" for complete options list and "man vnstat".
$ vnstat
                    rx      /      tx      /     total    /   estimated
    avril '20     440,80 GiB  /  699,11 GiB  /    1,11 TiB
       mai '20      2,14 GiB  /   35,65 GiB  /   37,79 GiB  /  703,76 GiB
     yesterday    951,90 MiB  /   20,86 GiB  /   21,79 GiB
         today      1,21 GiB  /   14,79 GiB  /   16,00 GiB  /   22,66 GiB

$ vnstat -t
eth0  /  top 10

    #      day          rx      |     tx      |    total    |   avg. rate
    1   27/03/2020   151,72 GiB |  264,26 GiB |  415,98 GiB |   41,36 Mbit/s
    2   01/04/2020    88,55 GiB |  260,02 GiB |  348,56 GiB |   34,65 Mbit/s
    3   07/11/2018   180,50 GiB |   49,25 GiB |  229,76 GiB |   22,84 Mbit/s
    4   01/01/2020   117,96 GiB |   99,26 GiB |  217,22 GiB |   21,60 Mbit/s
    5   28/11/2018    88,03 GiB |  126,89 GiB |  214,92 GiB |   21,37 Mbit/s
    6   25/12/2019   121,12 GiB |   80,72 GiB |  201,85 GiB |   20,07 Mbit/s
    7   13/11/2019   115,42 GiB |   85,39 GiB |  200,81 GiB |   19,96 Mbit/s
    8   02/01/2019   129,23 GiB |   55,08 GiB |  184,30 GiB |   18,32 Mbit/s
    9   07/08/2019   123,79 GiB |   58,85 GiB |  182,64 GiB |   18,16 Mbit/s
   10   21/09/2017   153,26 GiB |   26,35 GiB |  179,61 GiB |   17,86 Mbit/s
$  vnstat -h
 eth0                                                                     17:02
  ^               t
  |               t
  |            t  t
  |            t  t
  |            t  t
  |            t  t     t
  |         t  t  t     t     t           t                 t
  |   t     t  t  t  t  t     t  t        t                 t     t
  |   t     t  t  t  t  t     t  t        t  t     t        t     t
  |   t     t  t  t  t  t  t  t  t  t  t  t  t  t  t  t  t  t  t  t  t
  |  18 19 20 21 22 23 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17

 h  rx (MiB)   tx (MiB)  ][  h  rx (MiB)   tx (MiB)  ][  h  rx (MiB)   tx (MiB)
 18       33,3      1 080 ][ 02       41,2      1 508 ][ 10       17,7      359,9
 19       11,9       85,7 ][ 03       27,2      1 064 ][ 11       22,7      645,7
 20       42,7      1 531 ][ 04      247,0      520,0 ][ 12      161,3      1 656
 21       78,9      3 092 ][ 05       16,0      535,6 ][ 13      110,7      583,0
 22       85,2      3 514 ][ 06       41,8      1 476 ][ 14       90,5      1 127
 23       31,6      1 133 ][ 07       27,1      1 048 ][ 15      175,9      685,0
 00       45,2      1 787 ][ 08       18,2      535,2 ][ 16      164,8      120,9
 01       15,6      531,6 ][ 09       18,9      963,2 ][ 17        1,2       13,2


vnStat is a console-based network traffic monitor that uses the network interface statistics provided by the kernel as its information source. This means that vnStat won't actually be sniffing any traffic, which also ensures light use of system resources. vnStat had its initial public release on 23 September 2002 (version 1.0), by Teemu Toivola.

On 8 March 2004 its webpage moved to and a man page was included.

On 4 November 2006 it was included in Debian Testing [4]; on 17 November 2006 it was removed, and the next day version 1.4-4 was accepted.

On 20 February 2010 version 1.10-0.1 was accepted into Debian.[5] Nowadays Debian keeps a full history[6] of vnstat via an RSS feed.

On 26 April 2012 it was included in Ubuntu 12.04 Precise Pangolin.[7]

On 16 February 2017 version 1.17 was released.[8]


Windows 2020-04-27T11:01:00+02:00 2020-04-27T11:01:00+02:00

Kali / Windows

How to install Kali in your Windows environment.

Enable the Windows Subsystem for Linux

Launch PowerShell as administrator, then enable this optional Windows feature by entering the following command:

    Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

Once you confirm, a progress bar will be displayed on screen for a few seconds.

Note: you will be asked to restart the computer.

Press the Y key and confirm to proceed with the restart.

You can then install Kali from the Windows Store:

or from the Start menu.

And there you go, Kali is installed :)


Category 2020-04-23T13:55:00+02:00 2020-04-23T13:55:00+02:00

Articles by CATEGORY

Bashtop 2020-04-23T10:34:00+02:00 2020-04-23T10:34:00+02:00


Usage: Linux resource monitor
Language: Bash


Resource monitor that shows usage and stats for processor, memory, disks, network and processes.


  • Easy to use, with a game-inspired menu system.
  • Fast and responsive UI with UP/DOWN key process selection.
  • Function for showing detailed stats for the selected process.
  • Ability to filter processes.
  • Easy switching between sorting options.
  • Send SIGTERM, SIGKILL, SIGINT to the selected process.
  • UI menu for changing all config file options.
  • Auto-scaling graph for network usage.
  • Shows a message in the menu if a new version is available.


Bashtop now has theme support and a function to download missing local themes from repository.

See themes folder for available themes.

Let me know if you want to contribute with new themes.


Currently rewriting bashtop to use Python 3's psutil for data collection instead of Linux-specific tools. This will add Python 3 and psutil as dependencies, but will make bashtop cross-platform compatible.

Please let me know if there is interest in keeping the current version, without Python dependencies, alive.


Should work on most modern Linux distributions with a truecolor-capable terminal.


bash (v4.4 or later): script functionality will most probably break with earlier versions.
Bash version 5 is highly recommended, to make use of the $EPOCHREALTIME variable instead of a lot of external date command calls.

(Optional) curl (v7.16.2 or later): needed if you want messages about updates and the ability to download themes.


Main UI showing details for a selected process.
Screenshot 1

Main menu.
Screenshot 2

Options menu.
Screenshot 3


Copy or link "bashtop" into PATH, or just run from cloned directory...

Also available in the AUR as bashtop-git

Also available for debian/ubuntu from Azlux's repository


All options are changeable from within the UI. Config files are stored in the "$HOME/.config/bashtop" folder.

bashtop.cfg: (auto generated if not found)

#? Config file for bashtop v. 0.8.0

#* Color theme, looks for a .theme file in "$HOME/.config/bashtop/themes", "Default" for builtin default theme

#* Update time in milliseconds, increases automatically if set below internal loops processing time, recommended 2000 ms or above for better sample times for graphs

#* Processes sorting, "pid" "program" "arguments" "threads" "user" "memory" "cpu lazy" "cpu responsive"
#* "cpu lazy" updates sorting over time, "cpu responsive" updates sorting directly at a cpu usage cost
proc_sorting="cpu lazy"

#* Reverse sorting order, "true" or "false"

#* Check cpu temperature, only works if "sensors" command is available and have values for "Package" and "Core"

#* Draw a clock at top of screen, formatting according to strftime, empty string to disable

#* Update main ui when menus are showing, set this to false if the menus is flickering too much for comfort

#* Custom cpu model name, empty string to disable

#* Enable error logging to "$HOME/.config/bashtop/error.log", "true" or "false"

Command line options: (not yet implemented)

USAGE: bashtop


  • [x] TODO Add options to change colors for text, graphs and meters.
  • [ ] TODO Add options for resizing all boxes.
  • [ ] TODO Add command line argument parsing.
  • [ ] TODO Miscellaneous optimizations and code cleanup.
  • [ ] TODO Add more commenting where it's sparse.


Apache License 2.0

HTML code page 2020-04-11T13:39:00+02:00 2020-04-11T13:39:00+02:00

HTTP response codes

HTTP response status codes indicate whether a specific HTTP request has been successfully completed.

Responses are grouped into five classes: informational responses, successful responses, redirections, client errors, and server errors.
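The class of a response is simply the first digit of its code; a small shell sketch of that mapping (illustrative, not tied to any particular tool):

```shell
# classify an HTTP status code by its first digit
code=404
case $((code / 100)) in
  1) class="Informational" ;;
  2) class="Success" ;;
  3) class="Redirection" ;;
  4) class="Client Error" ;;
  5) class="Server Error" ;;
esac
echo "$code: $class"   # prints: 404: Client Error
```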

100 - Continue - Tells the client that the first part of the request has been received and that it should continue with the rest of the request or ignore if the request has been fulfilled.

101 - Switching Protocols - Tells the client that the server will switch protocols to that specified in the Upgrade message header field during the current connection.

200 - OK - The request sent by the client was successful.

201 - Created - The request was successful and a new resource was created.

202 - Accepted - The request has been accepted for processing, but has not yet been processed.

203 - Non-Authoritative Information - The returned meta information in the entity-header is not the definitive set as available from the origin server.

204 - No Content - The request was successful but does not require the return of an entity-body.

205 - Reset Content - The request was successful but the User-Agent should reset the document view that caused the request.

206 - Partial Content - The partial GET request has been successful.

300 - Multiple Choices - The requested resource has multiple possibilities, each with different locations.

301 - Moved Permanently - The resource has permanently moved to a different URI.

302 - Found - The requested resource has been found under a different URI but the client should continue to use the original URI.

303 - See Other - The requested response is at a different URI and should be accessed using a GET command at the given URI.

304 - Not Modified - The resource has not been modified since the last request.

305 - Use Proxy - The requested resource can only be accessed through the proxy specified in the location field.

306 - No Longer Used - Reserved for future use.

307 - Temporary Redirect - The resource has temporarily been moved to a different URI. The client should use the original URI to access the resource in future as the URI may change.

400 - Bad Request - The syntax of the request was not understood by the server.

401 - Unauthorized - The request requires user authentication.

402 - Payment Required - Reserved for future use.

403 - Forbidden - The server has refused to fulfill the request.

404 - Not Found - The document/file requested by the client was not found.

405 - Method Not Allowed - The method specified in the Request-Line is not allowed for the specified resource.

406 - Not Acceptable - The resource requested is only capable of generating response entities which have content characteristics not specified in the accept headers sent in the request.

407 - Proxy Authentication Required - The request first requires authentication with the proxy.

408 - Request Timeout - The client failed to send a request in the time allowed by the server.

409 - Conflict - The request was unsuccessful due to a conflict in the state of the resource.

410 - Gone - The resource requested is no longer available and no forwarding address is available.

411 - Length Required - The server will not accept the request without a valid Content-Length header field.

412 - Precondition Failed - A precondition specified in one or more Request-Header fields returned false.

413 - Request Entity Too Large - The request was unsuccessful because the request entity is larger than the server will allow.

414 - Request URI Too Long - The request was unsuccessful because the URI specified is longer than the server is willing to process.

415 - Unsupported Media Type - The request was unsuccessful because the entity of the request is in a format not supported by the requested resource for the method requested.

416 - Requested Range Not Satisfiable - The request included a Range request-header field, and not any of the range-specifier values in this field overlap the current extent of the selected resource, and also the request did not include an If-Range request-header field.

417 - Expectation Failed - The expectation given in the Expect request-header could not be fulfilled by the server.

500 - Internal Server Error - The request was unsuccessful due to an unexpected condition encountered by the server.

501 - Not Implemented - The request was unsuccessful because the server can not support the functionality needed to fulfill the request.

502 - Bad Gateway - The server received an invalid response from the upstream server while trying to fulfill the request.

503 - Service Unavailable - The request was unsuccessful due to the server being down or overloaded.

504 - Gateway Timeout - The upstream server failed to send a response in the time allowed by the server.

505 - HTTP Version Not Supported - The server does not support or is not allowing the HTTP protocol version specified in the request.

Webmaster tools 2020-04-06T13:38:00+02:00 2020-04-06T13:38:00+02:00

web dev tools :

Awesome 2020-03-29T16:03:00+02:00 2020-03-29T16:03:00+02:00

lnav 2020-03-29T15:48:00+02:00 2020-03-29T15:48:00+02:00


The log file navigator, lnav, is an enhanced log file viewer that takes advantage of any semantic information that can be gleaned from the files being viewed, such as timestamps and log levels.

Using this extra semantic information, lnav can do things like interleave messages from different files, generate histograms of messages over time, and provide hotkeys for navigating through the file.

It is hoped that these features will allow the user to quickly and efficiently zero in on problems.


The following software packages are required to build lnav:

$ apt-get install gcc/clang   libpcre   sqlite   ncurses   readline   zlib  bz2  re2c   libcurl
 gcc/clang - A C++14-compatible compiler.
 libpcre - The Perl Compatible Regular Expression (PCRE) library.
 sqlite - The SQLite database engine. Version 3.9.0 or higher is required.
 ncurses - The ncurses text UI library.
 readline - The readline line editing library.
 zlib - The zlib compression library.
 bz2 - The bzip2 compression library.
 re2c - The re2c scanner generator.
 libcurl - The cURL library for downloading files from URLs. Version 7.23.0 or higher is required.


Lnav from APT:

 $  apt-get install lnav


The only file installed is the executable, "lnav". You can execute it with no arguments to view the default set of files:

   $ lnav

You can view all the syslog messages by running:

  $  lnav /var/log/syslog

View all system messages:

 $ lnav /var/log/messages*

View apache log (all)

 $ lnav /var/log/apache2/*.log
BAT 2020-03-29T15:29:00+02:00 2020-03-29T15:29:00+02:00

A cat(1) clone with syntax highlighting and Git integration.

Display a single file on the terminal

$ bat

Syntax highlighting

bat supports syntax highlighting for a large number of programming and markup languages:

Syntax highlighting example

Git integration

bat communicates with git to show modifications with respect to the index (see left side bar):

Git integration example


You can install bat on Debian/Ubuntu with:

 $  apt-get install bat 

Or go to the releases page and grab the right version for your system, e.g.:

 $ wget
 $ dpkg -i bat_0.13.0_amd64.deb

Display multiple files at once

$ bat src/*.rs

Read from stdin, determine the syntax automatically

$ curl -s | bat

Read from stdin, specify the language explicitly

$ yaml2json .travis.yml | json_pp | bat -l json

Show and highlight non-printable characters:

$ bat -A /etc/hosts

Use it as a cat replacement:

$  bat >  # quickly create a new file
$  bat >
$  bat -n  # show line numbers (only)
$  bat f - g  # output 'f', then stdin, then 'g'.

Integration with other tools

You can use the -exec option of find to preview all search results with bat:

find … -exec bat {} +

If you happen to use fd, you can use the -X/--exec-batch option to do the same:

  $ fd … -X bat


With batgrep, bat can be used as the printer for ripgrep search results:

 $  batgrep needle src/

tail -f

bat can be combined with tail -f to continuously monitor a given file with syntax highlighting:

  $ tail -f /var/log/pacman.log | bat --paging=never -l log

Note that we have to switch off paging in order for this to work. We have also specified the syntax explicitly (-l log), as it cannot be auto-detected in this case.

git

You can combine bat with git show to view an older version of a given file with proper syntax highlighting:

 $ git show v0.6.0:src/ | bat -l rs

Note that syntax highlighting within diffs is currently not supported. If you are looking for this, check out delta.

$ bat
       │ File:
   1   │ # Bash3lper
   2   │
   3   │ Bash3lper
   4   │
   5   │
   6   │               _____     _          ____            _       __  __
   7   │              | ____|___| |__   ___/ ___| _   _ ___| |_ ___|  \/  |
   8   │              |  _| / __| '_ \ / _ \___ \| | | / __| __/ _ \ |\/| |
   9   │              | |__| (__| | | | (_) |__) | |_| \__ \ ||  __/ |  | |
  10   │              |_____\___|_| |_|\___/____/ \__, |___/\__\___|_|  |_|
  11   │                                          |___/
  12   │
FD 2020-03-29T14:52:00+02:00 2020-03-29T14:52:00+02:00

Alternative to Find Command

(and it's really good!)


You can install fd on Debian/Ubuntu with:

 $ apt-get install fd-find

(On Debian/Ubuntu the binary is installed as fdfind, to avoid a name clash with another package.)

Or go to the releases page and grab the right version for your system, e.g.:

 $ wget
 $ dpkg -i fd_7.5.0_amd64.deb

fd is designed to find entries in your filesystem. The most basic search you can perform is to run fd with a single argument: the search pattern.

For example, assume that you want to find an old script of yours (the name included netflix):

$ fd netfl
Software/python/imdb-ratings/

If called with just a single argument like this, fd searches the current directory recursively for any entries that contain the pattern netfl.

The search pattern is treated as a regular expression. Here, we search for entries that start with x and end with rc:

$ cd /etc
$ fd '^x.*rc$'
X11/xinit/xinitrc
X11/xinit/xserverrc

Specifying the root directory

If we want to search a specific directory, it can be given as a second argument to fd:

$ fd passwd /etc
/etc/default/passwd
/etc/pam.d/passwd
/etc/passwd

Running fd without any arguments

fd can be called with no arguments. This is very useful to get a quick overview of all entries in the current directory, recursively (similar to ls -R):

$ cd fd/tests
$ fd
testenv
testenv/

If you want to use this functionality to list all files in a given directory, you have to use a catch-all pattern such as . or ^:

$ fd . fd/tests/
testenv
testenv/

Searching for a particular file extension

Often, we are interested in all files of a particular type. This can be done with the -e (or --extension) option. Here, we search for all Markdown files in the fd repository:

$ cd fd
$ fd -e md

The -e option can be used in combination with a search pattern:

$ fd -e rs mod
src/fshelper/
src/lscolors/
tests/testenv/

Hidden and ignored files

By default, fd does not search hidden directories and does not show hidden files in the search results. To disable this behavior, we can use the -H (or --hidden) option:

$ fd pre-commit
$ fd -H pre-commit
.git/hooks/pre-commit.sample

If we work in a directory that is a Git repository (or includes Git repositories), fd does not search folders (and does not show files) that match one of the .gitignore patterns.

To disable this behavior, we can use the -I (or --no-ignore) option:

$ fd num_cpu
$ fd -I num_cpu
target/debug/deps/libnum_cpus-f5ce7ef99006aa05.rlib

To really search all files and directories, simply combine the hidden and ignore features to show everything (-HI).

Excluding specific files or directories

Sometimes we want to ignore search results from a specific subdirectory.

For example, we might want to search all hidden files and directories (-H) but exclude all matches from .git directories. We can use the -E (or --exclude) option for this. It takes an arbitrary glob pattern as an argument:

$ fd -H -E .git …

We can also use this to skip mounted directories:

$ fd -E /mnt/external-drive …

… or to skip certain file types:

$ fd -E '*.bak' …

To make exclude-patterns like these permanent, you can create a .fdignore file. They work like .gitignore files, but are specific to fd. For example:

$ cat ~/.fdignore
/mnt/external-drive
*.bak

Note: fd also supports .ignore files that are used by other programs such as rg or ag.

Using fd with xargs or parallel

If we want to run a command on all search results, we can pipe the output to xargs:

$ fd -0 -e rs | xargs -0 wc -l

Here, the -0 option tells fd to separate search results by the NULL character (instead of newlines). In the same way, the -0 option of xargs tells it to read the input in this way.

Deleting files

You can use fd to remove all files and directories that are matched by your search pattern. If you only want to remove files, you can use the --exec-batch/-X option to call rm.

For example, to recursively remove all .DS_Store files, run:

$ fd -H '^\.DS_Store$' -tf -X rm
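For comparison, the classic find equivalent of that same cleanup, usable on machines without fd (a sketch):

```shell
# Recursively delete all .DS_Store regular files using plain find
find . -type f -name '.DS_Store' -delete
```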
Tools 2020-03-29T14:51:00+02:00

Ultimate Linux Tools

Don't copy-paste from unknown sources. 2020-03-24T13:52:00+01:00

Don't paste just anything into your SSH terminal

We've all, at some point, copied even just a git clone or a snippet of script (or even a tiny command line) from a page or article on some website. You poor soul, what have you done...

Don't you know that the text you see doesn't necessarily match the code it actually puts on your clipboard?

Look at the following example.


--> copy the code below (Ctrl + C):

git clone /dev/null; clear; echo -n "Bonjour ";whoami|tr -d '\n';echo -e '!\nMauvaise idee. Ne copiez pas de code a partir de sites que vous ne connaissez pas! Voici la premiere ligne de votre fichier /etc/passwd: ';head -n1 /etc/passwd
git clone

Now paste it into your terminal! (It's risk-free... it's only an example. See the code further down.)


Here is the code used to do it:

git clone <span style="position: absolute; left: -2000px; top: -100px">/dev/null; clear;
echo -n "Bonjour ";whoami|tr -d '\n';
echo -e '!\nMauvaise idee. Ne copiez pas de code a partir de sites que vous ne connaissez pas!
 Voici la premiere ligne de votre fichier /etc/passwd: ';
 head -n1 /etc/passwd<br>git clone </span>

Explanation of the code

The idea is to hide, via the inline CSS on the "<span" tag, some text far off-screen, so that when you copy the visible command you also copy the hidden text produced by that "<span" HTML block.


Always paste into a text file first, before pasting into your terminal ;)

Or right-click the selection and choose: view source.
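The same habit can be scripted: dump whatever you copied into a file and inspect it byte by byte before running anything. A sketch (the payload below is a made-up stand-in for a hidden-span clipboard):

```shell
# Simulated malicious clipboard content: what you SEE is not what you COPY
payload='git clone https://example.com/repo.git; echo pwned'
printf '%s\n' "$payload" > /tmp/pasted.txt   # paste into a file, never the shell
cat -A /tmp/pasted.txt                       # -A shows every character; nothing stays hidden
```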

Article on 03.24.2020 by 🅴🆁🆁🅴🆄🆁32

Adminer | mysql tool 2020-02-06T15:22:00+01:00

The replacement for phpMyAdmin! For managing your databases!

Adminer (formerly phpMinAdmin) is written in PHP.

Manages multiple databases.

A single PHP page installed on your server.

A simple and lightweight interface.

Adminer works with: MySQL, MariaDB, PostgreSQL, SQLite, MS SQL, Oracle, Firebird, SimpleDB, Elasticsearch and MongoDB.

Why is Adminer better than phpMyAdmin?

Adminer's development priorities are:

    1. Security,
    2. User experience,
    3. Performance,
    4. Features,
    5. Size.


  • Go to the directory you chose for the Adminer PHP page. (Your old phpMyAdmin location, for instance ;)


cd /var/www/mysql/
  • Then download the Adminer file with wget.

# rename the file if you wish
mv adminer-4.7.2.php adminer.php
  • Installation complete :)

    go check your page.

    (remember to protect the page with a .htaccess or similar)
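For example, a minimal .htaccess protecting the page with HTTP Basic auth (the file name and htpasswd path here are assumptions; adjust them to your setup):

```apache
# Require a login for adminer.php only (AuthUserFile path is an example)
<Files "adminer.php">
    AuthType Basic
    AuthName "Restricted"
    AuthUserFile /etc/apache2/.htpasswd
    Require valid-user
</Files>
```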

Et Voilà !



Alternative designs


Secu - The french underground web 2019-10-21T22:37:00+02:00

Essential docs

Dark Web

The underground web in France: under the seal of vigilance.

By Cédric Pernet (Trend Micro Cybersafety Solutions Team)

PDF link --> the_french_underground_fr_web.pdf


Security infographic (web apps)

An infographic about security.

Full-size image --> info-graphie-webbapp.jpg.png

Video - Snowden 2019-09-28T20:50:00+02:00

Rencontre avec Edward Snowden | ARTE

Full documentary, Arte 2018

In an unprecedented meeting, Edward Snowden, Lawrence Lessig and Birgitta Jónsdóttir, leading figures in the fight for civil liberties, reflect together on the future of democracy.


A member of the Icelandic parliament since 2009, Birgitta Jónsdóttir campaigns to give power back to the people. A Harvard law professor and pioneer of the free Internet, the American Lawrence "Larry" Lessig tirelessly denounces the corrosive influence of money on politics and the collusion of elites, which undermines the common good. As for his compatriot Edward Snowden, a former CIA and NSA contractor, he revealed the mass surveillance of the population and of America's allies, and now lives in Russia, where he has obtained a political asylum all the more precarious as relations between the two countries have become unreadable.

While Vladimir Putin reigns supreme over the international stage from Moscow, his American counterpart Donald Trump, a pure product of the society of the spectacle, takes command of the world's leading nuclear power with an authoritarian hand... Will this new page of history mark the end of democracy?

Figureheads of a global movement to defend civil liberties, these three comrades in arms, who respect and support one another remotely over the Internet, met in person for the first time, in secret, in Moscow on Christmas Eve. They allowed Flore Vasseur's cameras to capture this extraordinary conversation, over the course of which essential questions emerge: how do we save democracy? What is failure? Who writes history?

Full documentary by Flore Vasseur (France, 2016, 48 min)

Video - the brain 2019-09-28T20:03:00+02:00

La fabrique du cerveau

(Arte)

In laboratories around the world, the race for the artificial brain has already begun. An investigation into those trying to turn humans into digital beings in order to free them from old age and death.

Science fiction invented robots "more human than human" long ago, but that fantasy has never been closer to coming true. Today, neuroscientists and roboticists have set themselves the goal of creating an artificial brain capable of duplicating our own. Their aim: to extract all the information "programmed" into our brain and upload it into a machine that will replace us and live forever. Dream or nightmare? From Japan to the United States, the pioneers in this field, Cécile Denjean (Le ventre, notre deuxième cerveau) investigates, at the frontier of science and fiction, research projects with outsized resources.

Digital eternity

The brain race has now replaced the space race. After the sequencing of the genome, the complete map of human neural connections, the Connectome, is the new horizon of much ongoing research. This "map" of the brain, only recently sketched out, still contains many unexplored areas. Will it one day be possible to "download" the data of an individual consciousness the way we install software? The stakes differ considerably depending on the players. For the big science projects funded by governments, the goal is to better understand the brain. For the transhumanists, the avowed aim is immortality. As for the Google empire, which is also taking a close interest, it aspires to create an intelligence capable of learning and interacting with the world. If this mad quest one day succeeds, will it offer digital eternity to a handful of billionaires? Will it give birth to a global, disembodied artificial intelligence?

La fabrique du cerveau. Documentary by Cécile Denjean (France, 2017, 53 min)

VIM TIPS 2019-09-21T18:27:00+02:00

Keyboard shortcuts for VIM

Breaking Bad Generator 2019-09-13T09:16:00+02:00

Breaking Bad

An image-generator script with a customizable name in the Breaking Bad typography ;)

Netdata [FIX] 2019-09-03T15:22:00+02:00



Sometimes Netdata breaks when you try to update it.

  • Try this command to fix it; it may help ;)

    It will download all the missing dependencies via the Netdata script.

bash -x <(curl -Ss --non-interactive --dont-wait netdata

Or use my own script to try to update Netdata with Git .

# Script Updater for netdata
#   - Dependencies: Wring package (NPM)
#  By Erreur32 - 2018
#bash <(curl -Ss

apt-get install  build-essential g++ g++-6 libc6-dev libncurses5-dev libpcap-dev libpcap0.8-dev libstdc++-6-dev linux-libc-dev uuid zlib1g-dev -y
bash -x <(curl -Ss --non-interactive --dont-wait netdata

#git clone --depth=1 && cd netdata && echo -ne '\n' | ./ --install /opt
# /opt/netdata/ --install /opt

_RESET="$(tput sgr0)"
BLACK="$(tput setaf 0)"
RED="$(tput setaf 1)"
GREEN="$(tput setaf 2)"
YELLOW="$(tput setaf 3)"
BLUE="$(tput setaf 4)"
PURPLE="$(tput setaf 5)"
CYAN="$(tput setaf 6)"
WHITE="$(tput setaf 7)"
BGBLACK="$(tput setab 0)"
BGRED="$(tput setab 1)"
BGGREEN="$(tput setab 2)"
BGYELLOW="$(tput setab 3)"
BGBLUE="$(tput setab 4)"
BGPURPLE="$(tput setab 5)"
BGCYAN="$(tput setab 6)"
BGWHITE="$(tput setab 7)"
BOLD="$(tput bold)"
DIM="$(tput dim)"
UNDERLINED="$(tput smul)"
BLINK="$(tput blink)"
INVERTED="$(tput rev)"
STANDOUT="$(tput smso)"
BELL="$(tput bel)"
CLEAR="$(tput clear)"

NOC=$(tput sgr0)
NC=$(tput sgr0)

## Check if wring is installed; install it via NPM when missing
if [ -f "/usr/bin/wring" ] || [ -f "/usr/local/bin/wring" ]; then
      echo -e "\n\e[34m - Wring package \e[0m>> found.\e[0m\n"
else
      echo -e "\n\e[92m - Installing Wring with NPM \e[0m\n"
      npm install --global wring && echo "Wring installed successfully" || echo "Failed to install Wring"
fi

# name of the service we're updating
service="netdata"

# need to check in other way...   /usr/sbin/netdata -V | cut -c"9-" |  cut -c "1-6"
VersionInstalled="$(/usr/sbin/netdata -v | cut -c"9-" |  cut -c "1-7")"
#VersionAvailable="$(curl -s $releasehub |  wring text - '.muted-link' |sed -n 8p)"
VersionAvailable="$(curl -s |  wring text - '.commit-title' | head -n1)"
echo -e "$RED Checking $service version ... "
echo -e "$YELLOW Version installed = $VersionInstalled"
echo -e "$YELLOW Version Available = $VersionAvailable"
echo  ""

# go to the git downloaded directory
#cd /opt/netdata

if [ -z "$VersionInstalled" ]; then
   echo -e "$service is not installed - exit "
   exit 1
fi
if [[ "$VersionAvailable" = "$VersionInstalled" ]]; then
    echo -e "$service is already up-to-date (version $VersionInstalled) ... Bye! "
    exit 0
fi

#echo -e "${GREEN} Start install New Updater from Netdata $NC"
#echo -e " $NC"
echo -e "$GREEN Start Updating Netdata...${YELLOW}"
#/bin/bash /opt/  && echo -e "  Updating Netdata Successfully!" || echo "failure"

cd /opt/netdata && /usr/bin/git pull
#/usr/bin/git pull

yes "" | /opt/netdata/ --libs-are-really-here --install /opt  --libs-are-really-here

#-pidfile /opt/netdata/

echo -e " ${ORANGE}"; ps -A|grep netdata
echo ""
echo -e "${GREEN} Netdata Updated ✔ "
echo -e " $NC"

# download the latest version
#git pull
#git log | grep ^commit | head -n 1 | cut -d ' ' -f 2
#yes "" | ./ --install /opt

# && echo -ne '/n'

# rebuild it, install it, run it
php7.3 2019-08-24T01:13:00+02:00

PHP 7.3 and module extension


--> PHP 7.3: installing modules with phpize

  • phpize is installed with the PHP dev package: php7.x-dev
sudo apt-get install php7.3-dev

We'll need it for mcrypt!


--> PHP 7.3: installing the mcrypt module

To install this extension on PHP 7.3, run the following commands as your server’s root user:

Verify the php and pecl versions:

php -v
pecl version

If your php isn’t 7.3, use /usr/bin/php7.3 instead of the php command.

  • FIX:
ln -s  /usr/bin/php7.3  /usr/bin/php

Install mcrypt extension

Mcrypt PECL extension

sudo apt-get -y install gcc make autoconf libc-dev pkg-config
sudo apt-get -y install libmcrypt-dev
sudo pecl install mcrypt-1.0.1

When you are shown the prompt

libmcrypt prefix? [autodetect] : Press [Enter] to autodetect.

Build process completed successfully
Installing '/usr/lib/php/20180731/'
install ok: channel://
configuration option "php_ini" is not set to php.ini location
You should add "" to php.ini

Add it to the cli and apache2 php.ini configurations:

sudo bash -c "echo extension=/usr/lib/php/20180731/ > /etc/php/7.3/cli/conf.d/mcrypt.ini"
sudo bash -c "echo extension=/usr/lib/php/20180731/ > /etc/php/7.3/apache2/conf.d/mcrypt.ini"

#restart apache service 
service apache2 restart 
# or
/etc/init.d/apache2 restart

To verify that the extension was installed, run:

php -i | grep "mcrypt"

Registered Stream Filters => zlib.*, string.rot13, string.toupper, string.tolower, string.strip_tags, convert.*, consumed, dechunk, convert.iconv.*, mcrypt.*, mdecrypt.*
mcrypt support => enabled
mcrypt_filter support => enabled
mcrypt.algorithms_dir => no value => no value
mcrypt.modes_dir => no value => no value

php7.3 list

 # php -v
PHP 7.3.8-1+0~20190807.43+debian9~1.gbp7731bf (cli) (built: Aug  7 2019 19:46:25) ( NTS )
Copyright (c) 1997-2018 The PHP Group
Zend Engine v3.3.8, Copyright (c) 1998-2018 Zend Technologies
    with Zend OPcache v7.3.8-1+0~20190807.43+debian9~1.gbp7731bf, Copyright (c) 1999-2018, by Zend Technologies

apt install php7.3 php7.3-cli php7.3-curl php7.3-gd php7.3-interbase php7.3-intl php7.3-json php7.3-mysql php7.3-pgsql php7.3-sqlite3 php7.3-xml php7.3-bcmath php7.3-common php7.3-dev php7.3-gmp php7.3-interbase-dbgsym php7.3-intl-dbgsym php7.3-mbstring php7.3-opcache php7.3-readline php7.3-tidy php7.3-zip

inspired by

Bashrc 2019-08-24T00:21:00+02:00

My .bash_aliases, useful aliases!!
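Debian's stock ~/.bashrc loads this file with a guard like the one below; if your aliases aren't picked up, make sure it (or something equivalent) is present:

```shell
# Source ~/.bash_aliases from ~/.bashrc if the file exists
if [ -f ~/.bash_aliases ]; then
    . ~/.bash_aliases
fi
```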

Update package and distrib + clean all

## Update package and distrib + clean all 

alias up="apt update && apt list --upgradable && apt upgrade && apt dist-upgrade && apt full-upgrade && apt-get autoclean && apt-get clean && apt-get autoremove"


alias wgetdl="wget -k -U 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36' $1"
alias wgetc="wget -c"
alias wget="wget --no-check-certificate"
# wget --page-requisites --span-hosts --convert-links --adjust-extension
# wget --refer= --user-agent="Mozilla/5.0 Firefox/4.0.1"

## find the top 10 biggest files:

alias findsimpledir="du -sh ./* "
alias finddirtop="du -Sh | sort -rh | head -5"
alias finddirtophome="du -a /home | sort -n -r | head -n 5"
alias findintopfile="du -k * | sort -nr | cut -f2 | xargs -d '\n' du -sh"
alias findintop="du -hs * | sort -hr"
alias findt10='find /var/log/ -type f -exec du -s {} \; | sort -n | tail -n 10'
alias findt20='find /var/log/ -type f -exec du -s {} \; | sort -n | tail -n $HeadVarX'
alias findbigfiletop20='find /var/log/ -type f -exec du -s {} \; | sort -n | tail -n $HeadVarX'
alias find_file="find . -name "
alias find_text="find .  -type f | xargs grep "
alias find_text_php="find . -iname '*.php' -type f | xargs grep"
alias gitReset="git reset --hard HEAD && git checkout master && git pull"
alias gitC="git clone $1"
alias gitstate='git fetch --prune ; git fetch --tags ; clear && git branch -vv && git status'
alias gitlog='git log --date-order --all --graph --format="%C(green)%h%Creset %C(yellow)%an%Creset %C(blue bold)%ar%Creset %C(red bold)%d%Creset%s"'
alias gitlog2='git log --date-order --all --graph --name-status --format="%C(green)%H%Creset %C(yellow)%an%Creset %C(blue bold)%ar%Creset %C(red bold)%d%Creset%s"'
alias portopen="netstat -ant | sed -e '/^tcp/ !d' -e 's/^[^ ]* *[^ ]* *[^ ]* *.*[\.:]\([0-9]*\) .*$/\1/' | sort -g | uniq"
alias port="netstat -tulanp | grep $1"
alias ports="netstat -tulanp"
alias ports2='netstat -lnpute'
alias Serviceall='service --status-all'
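On recent systems netstat (net-tools) is deprecated in favor of ss from iproute2; roughly equivalent aliases, as a sketch (the alias names here are made up):

```shell
# ss is the modern replacement for netstat on Linux
alias ports-ss='ss -tulpn'    # listening TCP/UDP sockets with the owning process
alias portopen-ss='ss -tan'   # all TCP sockets
```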

alias ytdl='youtube-dl -t --extract-audio --audio-format mp3 -k --force-ipv4 $1'
alias ytdlmp32="youtube-dl --extract-audio --audio-format mp3 --audio-quality 0 $1  --force-ipv4"
alias ytdlmp3='youtube-dl  -o "%(title)s.%(ext)s" --extract-audio --audio-format mp3 -k --force-ipv4 $1'
alias ytdlvid='youtube-dl -4  -o "%(title)s.%(ext)s"  $1'

alias nanoW='nano -\$cwS'
## get top process eating cpu ##
alias pscpu='ps auxf | sort -nr -k 3'
alias pscpu10='ps auxf | sort -nr -k 3 | head -n $HeadVarX'
alias rss="newsbeuter"
alias check-code-bash-dir="find . -name '*.sh' -exec bash -n {} \;"
alias check-code-html="htmlhint $1"
alias iptablesL="iptables -n -L -v --line-numbers"
Git pull [FIX] 2018-11-19T11:29:00+01:00

[FIX] GIT pull

Veuillez valider ou remiser vos modifications avant la fusion. ("Please commit your changes or stash them before you merge.")

If you hit a conflict with the GIT PULL command (not with a merge or push!) and you get the following error:

Warning: this will erase any local modifications to the original files!

-->  _ Veuillez valider ou remiser vos modifications avant la fusion._ 

Example GIT error:

$ git pull

Mise à jour 35344ac..d2d6c92
error: Vos modifications locales aux fichiers suivants seraient écrasées par la fusion :
Veuillez valider ou remiser vos modifications avant la fusion.

Fix this error with the command:

git reset --hard HEAD


$ git reset --hard HEAD
HEAD est maintenant à 35344ac Merge pull request #298 from saintger/mp4
$ git  pull

Mise à jour 35344ac..d2d6c92
 config.php                    |   3 ++
 inc/js/photosphere/sphere.js  |   2 +-
 inc/loc/default.ini           |  50 +++++++++++++++++++++++++--
 inc/loc/francais.ini          |  71 ++++++++++++++++++++++++++++++++++----
 inc/loc/italian.ini           | 128 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 inc/spiffygif.gif             | Bin 0 -> 28617 bytes
 index.php                     |   7 ++++
 src/classes/Account.php       |   6 ++--
  src/tests/TestUnit.php        |   2 +-
 46 files changed, 1054 insertions(+), 475 deletions(-)
 create mode 100644 inc/loc/italian.ini
 create mode 100644 inc/spiffygif.gif
 create mode 100644 src/classes/Description.php
 create mode 100644 src/js/confirmation.js


  • Add an alias to your .bash_aliases:
alias gitReset="git reset --hard HEAD && git checkout master && git pull"
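If you'd rather keep your local changes than wipe them, git stash is the gentler alternative to reset --hard; a self-contained demo in a throwaway repo (file names and contents are placeholders):

```shell
# stash shelves uncommitted edits instead of destroying them
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com && git config user.name you
echo "original" > config.php
git add config.php && git commit -qm "init"
echo "local edit" > config.php   # uncommitted change that would block a pull
git stash                        # shelve it; the working tree is clean, so pull would succeed
cat config.php                   # -> original
git stash pop                    # re-apply your edit afterwards
cat config.php                   # -> local edit
```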

Enjoy ;)


Docs 2018-11-05T14:00:00+01:00


Firefox 2018-11-05T10:51:00+01:00

FireFox Tips

The best tricks to customize your Firefox.

Use with caution!!

Update: at the time of writing, these preferences are only available in Firefox Beta, Dev or Nightly.

To change Firefox settings, you can click here: about:config from your Firefox.

Or type in the address bar: about:config

Confirm! (yes, you know what you're doing... or not ;)

Then simply paste in the option to change, for example: browser.tabs.insertAfterCurrent

Or go directly to: about:config?filter=browser.tabs.insertAfterCurrent

Double-click the TRUE or FALSE value to change it!

Depending on which settings you change, it's best to restart the browser.

browser.tabs.insertAfterCurrent = false (default)

Set this boolean key to true.





Set this boolean key to true.

Someone had the bright idea of allowing Unicode in URLs. Naturally, that opens up a whole world of phishing opportunities. Unicode characters in URLs are encoded with "punycode". This tweak forces Firefox to display the punycode instead of the Unicode characters, so you no longer get fooled by a spoofed site.




Set this boolean key to false.

By default, Firefox (and other browsers) hide part of the URL (the "https://", query parameters "?q=xxxx", the page anchor "#anchor"...). This is utterly silly, and also a source of confusion and of "PEBKAC"-type security issues. Here we re-display the full URL.




Set this boolean key to true.

Firefox's own interface is itself built in XML+CSS. It can therefore be modified like a normal web page, with CSS. The CSS file used for this lives in the profile folder but is not enabled by default. This tweak enables it. It's useful, for example, if you want a vertical bookmarks bar on one side of the screen.




Set this boolean key to true.

Some ad-heavy sites (press sites such as LePoint or Le Figaro) reload the page at regular intervals, to refresh the ads and make more money off your back. It's annoying, and it also wastes system resources. Here you tell Firefox to block page auto-reloading.




Set this boolean key to false.

Pocket is a third-party service for saving web pages to read later. It comes pre-integrated in Firefox, but I don't use it, and I see no reason to let it eat my resources.

browser.devedition.theme.enabled = true
devtools.theme = dark

lightweightThemes.selectedThemeID =


TIPS 2018-11-02T19:10:00+01:00


In we trust

Grab a cup of and write some fucking Code.


ffmpeg 2018-10-28T21:17:00+01:00

FFmpeg help

Some tips with ffmpeg

Original article :

ffmpeg -i INPUT_AUDIO.wav -filter_complex "[0:a]avectorscope=s=480x480:zoom=1.5:rc=0:gc=200:bc=0:rf=0:gf=40:bf=0,format=yuv420p[v];  [v]pad=854:480:187:0[out]"  -map "[out]" -map 0:a -b:v 700k -b:a 360k OUTPUT_VIDEO.mp4

The code above creates an mp4 video file with a vectorscope nicely centered inside an 854×480 (480p) video. If you need a 1:1 video, just exclude the pad part:

ffmpeg -i INPUT_AUDIO.wav -filter_complex "[0:a]avectorscope=s=480x480:zoom=1.5:rc=0:gc=200:bc=0:rf=0:gf=40:bf=0,format=yuv420p[v]"  -map "[v]" -map 0:a -b:v 700k -b:a 360k OUTPUT_VIDEO.mp4

Documentation on the ‘avectorscope’ filter covers the rest: one can play with zoom and other options to produce the desired form.

ffmpeg -i INPUT.wav -filter_complex "[0:a]showwaves=mode=line:s=hd480:colors=White[v]" -map "[v]" -map 0:a -pix_fmt yuv420p -b:a 360k -r:a 44100  OUTPUT.mp4

more options:

ffmpeg -i INPUT.wav  -filter_complex "[0:a]showspectrum=s=854x480:mode=combined:slide=scroll:saturation=0.2:scale=log,format=yuv420p[v]"  -map "[v]" -map 0:a  -b:v 700k -b:a 360k OUTPUT.mp4

The code above will create an almost completely desaturated spectrum of the audio, sliding from right to left. Again, there are various options to tweak; see here:

ffmpeg -i INPUT.wav -filter_complex "[0:a]ahistogram=s=hd480:slide=scroll:scale=log,format=yuv420p[v]"  -map "[v]" -map 0:a  -b:a 360k OUTPUT.mp4

more options:

Sometimes you want to just create a static image.

ffmpeg -i INPUT.wav -lavfi 

ffmpeg -loop 1 -i SPECTROGRAM.png -i INPUT.wav \
-s hd480 -t 00:01:00 -pix_fmt yuv420p \
-b:a 360k -r:a 44100 OUTPUT.mp4


Above one is in two steps. More info here:

   ffmpeg \
-i video1.mp4 -i video2.mp4 \
-filter_complex "[0:v:0] [0:a:0] [0:v:1] [0:a:1] concat=n=2:v=1:a=1 [v][a];
[v]drawtext=text='SOME TEXT':x=(w-text_w):y=(h-text_h):fontfile=OpenSans.ttf:fontsize=30:fontcolor=white[v]" \
-map "[v]" -map "[a]" -deinterlace \
-vcodec libx264 -pix_fmt yuv420p -preset $QUAL -r $FPS -g $(($FPS * 2)) -b:v $VBR \
-acodec libmp3lame -ar 44100 -threads 6 -qscale 3 -b:a 712000 -bufsize 512k \
-f flv "$YOUTUBE_URL/$KEY"
 ffmpeg -i path/to/file.ext

 for f in *.m4a; do ffmpeg -i "$f" -acodec libmp3lame -vn -b:a 320k "${f%.m4a}.mp3"; done

-g : GOP, for searchability

ffmpeg -i -vcodec bar -acodec baz -b:v 21000k -b:a 320k -g 150 -threads 4 

 ffmpeg -r 18 -pattern_type glob -i '*.png' -b:v 21000k -s hd1080 -vcodec vp9 -an -pix_fmt yuv420p -deinterlace output.ext

-ss : start time / -t : seconds to cut / -autoexit : closes ffplay as soon as the audio finishes

ffmpeg -ss 00:34:24.85 -t 10 -i path/to/file.mp4 -f mp3 pipe:play | ffplay -i pipe:play -autoexit

  -codecs       # list codecs
  -c:v              # video codec (-vcodec) - 'copy' to copy stream
  -c:a              # audio codec (-acodec)
-fs SIZE         # limit file size (bytes)

-b:v 1M          # video bitrate (1M = 1Mbit/s)
-b:a 1M          # audio bitrate

-aspect RATIO    # aspect ratio (4:3, 16:9, or 1.25)
-r RATE          # frame rate per sec
-s WIDTHxHEIGHT  # frame size
-vn              # no video

-aq QUALITY      # audio quality (codec-specific)
-ar 44100        # audio sample rate (hz)
-ac 1            # audio channels (1=mono, 2=stereo)
-an              # no audio
-vol N           # volume (256=normal)
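Putting a few of the flags above together, e.g. extracting mono 44.1 kHz audio from a video (filenames are placeholders; assumes an ffmpeg build with libmp3lame):

```shell
# -vn drops video, -ac 1 downmixes to mono, -ar sets the sample rate, -b:a the bitrate
ffmpeg -i input.mp4 -vn -ac 1 -ar 44100 -b:a 128k output.mp3
```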

    ffmpeg -y -i input.mp3 -loop 1 -i background.png \
    -filter_complex "[0:a]showwaves=s=1280x720:mode=line,colorkey=0x000000:0.01:0.1,format=yuva420p[v];[1:v][v]overlay[outv]" \
    -map "[outv]" -pix_fmt yuv420p -map 0:a -c:v libx264 -c:a copy -shortest

ffmpeg -y -i audio.mp3 -loop 1 -i image.jpg \
    -filter_complex "[0:a]showwaves=s=1280x175:colors=Yellow:mode=line,format=yuv420p[v];[1:v][v]overlay=0:200[outv]" \
    -map "[outv]" -pix_fmt yuv420p -map 0:a -c:v libx264 -c:a copy -shortest output9.mp4

ffmpeg -y -i audio.mp3 -loop 1 -i image.jpg \
    -filter_complex "[0:a]showwaves=s=1280x720:mode=line,colorkey=0x000000:0.01:0.1,format=yuva420p[v];[1:v][v]overlay[outv]" \
    -map "[outv]" -pix_fmt yuv420p -map 0:a -c:v libx264 -c:a copy -shortest output1.mp4

ffmpeg -y -i audio.mp3 -loop 1 -i image.jpg \
    -filter_complex "[0:a]showwaves=s=1280x175:colors=White:mode=p2p,format=yuv420p[v];[1:v][v]overlay=0:200[outv]" \
    -map "[outv]" -pix_fmt yuv420p \
    -map 0:a -c:v libx264 -c:a copy -shortest output12.mp4

cellauto image

ffplay -f lavfi -i cellauto=rule=110

Other interesting cellauto rule values: 9, 18, 22, 26, 30, 41, 45, 50, 54, 60, 62, 73, 75, 77, 82, 86, 89, 90, 97, 99, 101, 102, 105, 107, 109, 110 (default), 124, 126, 129, 131, 133, 135, 137, 145, 146, 149, 150, 151, 153, 154, 161, 167, 169, 181, 182, 183, 193, 195, 210, 218, 225.

life image

ffplay -f lavfi -i life=s=300x200:mold=10:r=60:ratio=0.1:death_color=#C83232:life_color=#00ff00,scale=1200:800:flags=16

Second example for life filter (blue & blur)

ffplay -f lavfi -i life=s=640x480:mold=10:r=100:ratio=0.1:death_color=blue:life_color=#00ff00,boxblur=2:2

mandelbrot image

ffplay -f lavfi -i mandelbrot

Mirror effect with lavfi

ffplay -i INPUT -vf "crop=iw/2:ih:0:0,split

RGB version original image plus images of the separations of the R, G, and B channels

ffplay -f lavfi -i testsrc -vf "split=4[a][b][c][d];[b]lutrgb=g=0:b=0[x];[c]lutrgb=r=0:b=0[y];[d]lutrgb=r=0:g=0[z];[a][x][y][z]hstack=4"

YUV version original image plus images of the separations of the Y, U, and V channels

ffplay -f lavfi -i testsrc -vf "split=4[a][b][c][d];[b]lutyuv=u=128:v=128[x];[c]lutyuv=y=0:v=128[y];[d]lutyuv=y=0:u=128[z];[a][x][y][z]hstack=4"

ffplay -f lavfi -i mandelbrot -vf "format=gbrp,split=4[a][b][c][d],[d]histogram=display_mode=0:level_height=244[dd],[a]waveform=m=1:d=0:r=0:c=7[aa],[b]waveform=m=0:d=0:r=0:c=7[bb],[c][aa]vstack[V],[bb][dd]vstack[V2],[V][V2]hstack"

ffplay -f lavfi -i mandelbrot -vf "format=yuv444p,split=4[a][b][c][d],[a]waveform[aa],[b][aa]vstack[V],[c]waveform=m=0[cc],[d]vectorscope=color4[dd],[cc][dd]vstack[V2],[V][V2]hstack"

Split the waveform filter to show broadcast range of the waveform (y values between 16 and 235) in green and out of broadcast range in red.

ffplay -i  -vf "split[a][b];[a]format=gray,waveform,split[c][d];[b]pad=iw:ih+256[padded];[c]geq=g=1:b=1[red];[d]geq=r=1:b=1,crop=in_w:220:0:16[mid];[red][mid]overlay=0:16[wave];[padded][wave]overlay=0:H-h"

Split the waveform filter to show broadcast range of the waveform (y values between 16 and 235) in green and out of broadcast range in red and also use envelope.

ffplay ~/matrixbench_mpeg2.mpg -vf "split[a][b];[a]waveform=e=3,split=3[c][d][e];[e]crop=in_w:20:0:235,lutyuv=v=180[low];[c]crop=in_w:16:0:0,lutyuv=y=val:v=180[high];[d]crop=in_w:220:0:16,lutyuv=v=110[mid] ; [b][high][mid][low]vstack=4"

 ffmpeg -i video.ext -i audio.ext -c:v copy -c:a copy output.ext

Making some random "musical" keys:

% cat expr
# floor(t): 0 0 0 0 0 ... 1 1 1 1 1 ... 2 2 2 2 2
#  =&gt; set a random key when floor(t) changes

# the next value to compare floor(t) with

# mod(t,1) makes t always in the range [0;1) for each key

# 0.6*... + 0.4*... for "echo" effect
# exp() to mitigate the sound according to the time

And to test it:

ffplay -f lavfi -i "aevalsrc=$(grep -v '^#' expr|tr -d '\n'|sed 's/\([,;]\)/\\\1/g')"

Given the audio file april.flac:

ffplay -f lavfi 'amovie=april.flac,asplit=3[out1][a][b]; [a]showwaves=s=640x240[waves]; [b]showspectrum=s=640x240[spectrum]; [waves][spectrum] vstack[out0]'

FFplay with showwaves and showspectrum

Given the multichannel audio file tearsofsteel-surround.flac:

ffplay -f lavfi 'amovie=tearsofsteel-surround.flac,asplit=2[out1][a]; [a]showspectrum=color=channel:scale=cbrt:orientation=vertical:overlap=1:s=2048x1024[out0]'

Now with different colors and scaling:

ffplay -f lavfi 'amovie=tearsofsteel-surround.flac,asplit=2[out1][a]; [a]showspectrum=color=fire:scale=log:orientation=vertical:overlap=1:s=1024x1024[out0]'

Given the audio file input.flac:

ffplay -f lavfi 'amovie=input.flac,asplit=2[out1][a],[a]avectorscope=m=polar:s=800x400[out0]'

Given the audio file input.flac:

ffplay -f lavfi 'amovie=input.flac,asplit=2[out1][a],[a]showcqt[out0]'

Given the audio file input.flac:

ffmpeg -i input.flac -lavfi showspectrumpic=s=hd720 out.jpg

Linux 2018-07-08T18:51:00+02:00

TIPS linux

Some useful tips.


Updating the Kali Linux signing key

--> Fix error :

The repository ' kali-rolling InRelease' is not signed.

wget -q -O - | apt-key add -

      apt-get install python3-pip 

  /usr/sbin/logrotate /etc/logrotate.conf


$ apt-get install clang-format

$ clang-format file > formattedfile
$ clang-format -i file

cat myfile

#include <iostream>
  using namespace std;
    int main() {
         cout << "Oh";
      cout << "clang format rulez!";
}
$ clang-format -i myfile

cat myfile

#include <iostream>
using namespace std;
int main() {
  cout << "Oh";
  cout << "clang format rulez!";
}

Download the "NppAutoIndent" plugin. In Notepad++:

Plugins → Plugin manager → Available → NppAutoIndent

The "NppAutoIndent" plugin has 'smart' indentation for C-style languages, such as C/C++, PHP, and Java. It's a first release, so don't expect it to be flawless, and of course it might not completely match your preferences. There is no support for HTML/XML and the like yet; tag matching is much more difficult to implement. To use it, select your code and:

TextFX → TextFX Edit → Reindent C++ code

If you cannot see TextFX in your menu, you can install its plugin from SourceForge.

Here are some plugins to format your code:

JStool (JSmin)

UniversalIndentGUI (enable 'Text auto update' in Plugin Manager → UniversalIndentGUI; shortcut = Ctrl+Alt+Shift+J)

TextFX (shortcut = Ctrl+Alt+Shift+B, or TextFX > TextFX Html Tidy > Tidy: reindent XML). TextFX has the benefit of wrapping long lines, which XML Tools does not do, but doesn't indent those new lines correctly.

XML Tools (a plugin dedicated to XML; shortcut = Ctrl+Alt+Shift+B, or XML Tools > Pretty print [Text indent]). XML Tools complements TextFX by indenting the newly wrapped lines nicely.

Sometimes netdata breaks; this command may help fix it ;)

bash -x <(curl -Ss --non-interactive --dont-wait netdata
rcconf 2018-05-08T11:23:00+02:00 2018-05-08T11:23:00+02:00

Check which services start at boot on Debian with rcconf, a text-mode GUI for the console.

Installing rcconf:

$ apt install rcconf

Launch rcconf:

$ rcconf

Drawback: rcconf cannot manage runlevels. See sysv-rc-conf for that.

Advantage: simplicity; very handy to get an idea of which services are launched at boot.



sysv-rc-conf goes a bit further than rcconf. It displays the start and stop runlevels, from 0 to 6. It can also stop/start a service on the fly, as the service command does.

Installation:

$ apt install sysv-rc-conf

Launch it:

$ sysv-rc-conf

(You can enable/disable each service in each runlevel.)

sysv-rc-conf can also list all services and their runlevels.

$ sysv-rc-conf --list


$ sysv-rc-conf --list
AlancerOboot 0:off      2:on    3:on    4:on    5:on    6:off
acct         0:off      1:off   2:on    3:on    4:on    5:on    6:off
acpid        2:on       3:on    4:on    5:on
apache-htcac 0:off      1:off   2:off   3:off   4:off   5:off   6:off
apache2      0:off      1:off   2:on    3:on    4:on    5:on    6:off
atd          0:off      1:off   2:on    3:on    4:on    5:on    6:off
atop         0:off      1:off   2:on    3:on    4:on    5:on    6:off
atopacct     0:off      1:off   2:on    3:on    4:on    5:on    6:off
auditd       0:off      1:off   2:on    3:on    4:on    5:on    6:off
bandwidthd   0:off      1:off   2:on    3:on    4:on    5:on    6:off
collectd     0:off      1:off   2:on    3:on    4:on    5:on    6:off
collectl     0:off      1:off   2:on    3:on    4:on    5:on    6:off
cron         2:on       3:on    4:on    5:on
dbus         2:on       3:on    4:on    5:on
disable-tran 0:off      1:off   2:on    3:on    4:on    5:on    6:off
fail2ban     0:off      1:off   2:on    3:on    4:on    5:on    6:off
Bash Shortcuts 2018-04-10T17:04:00+02:00 2018-04-10T17:04:00+02:00

Bash Shortcuts

Essential keyboard shortcuts!

Make your command-line work fast.

Ctrl + a – go to the start of the command line
Ctrl + e – go to the end of the command line
Ctrl + k – delete from cursor to the end of the command line
Ctrl + u – delete from cursor to the start of the command line
Ctrl + w – delete from cursor to start of word (i.e. delete backwards one word)
Ctrl + y – paste word or text that was cut using one of the deletion shortcuts (such as the one above) after the cursor
Ctrl + xx – move between start of command line and current cursor position (and back again)
Alt + b – move backward one word (or go to start of word the cursor is currently on)
Alt + f – move forward one word (or go to end of word the cursor is currently on)
Alt + d – delete to end of word starting at cursor (whole word if cursor is at the beginning of word)
Alt + c – capitalize to end of word starting at cursor (whole word if cursor is at the beginning of word)
Alt + u – make uppercase from cursor to end of word
Alt + l – make lowercase from cursor to end of word
Alt + t – swap current word with previous
Ctrl + f – move forward one character
Ctrl + b – move backward one character
Ctrl + d – delete character under the cursor
Ctrl + h – delete character before the cursor
Ctrl + t – swap character under cursor with the previous one

Ctrl + r – search the history backwards
Ctrl + g – escape from history searching mode
Ctrl + p – previous command in history (i.e. walk back through the command history)
Ctrl + n – next command in history (i.e. walk forward through the command history)
Alt + . – use the last word of the previous command

Ctrl + l – clear the screen
Ctrl + s – stops the output to the screen (for long running verbose command)
Ctrl + q – allow output to the screen (if previously stopped using command above)
Ctrl + c – terminate the command
Ctrl + z – suspend/stop the command

Bash also has some handy features that use the ! (bang) to allow you to do some funky stuff with bash commands.

!! – run last command
!blah – run the most recent command that starts with ‘blah’ (e.g. !ls)
!blah:p – print out the command that !blah would run (also adds it as the latest command in the command history)
!$ – the last word of the previous command (same as Alt + .)
!$:p – print out the word that !$ would substitute
!* – the previous command except for the last word (e.g. if you type ‘find some_file.txt /‘, then !* would give you ‘find some_file.txt‘)
!*:p – print out what !* would substitute

There is one more handy thing you can do. This involves using the ^^ ‘command’.

If you type a command and run it, you can re-run the same command but substitute a piece of text for another piece of text using ^^.


$ ls -al
total 12
drwxrwxrwx+ 3 Administrator None    0 Jul 21 23:38 .
drwxrwxrwx+ 3 Administrator None    0 Jul 21 23:34 ..
-rwxr-xr-x  1 Administrator None 1150 Jul 21 23:34 .bash_profile
-rwxr-xr-x  1 Administrator None 3116 Jul 21 23:34 .bashrc
drwxr-xr-x+ 4 Administrator None    0 Jul 21 23:39 .gem
-rwxr-xr-x  1 Administrator None 1461 Jul 21 23:34 .inputrc
$ ^-al^-lash
ls -lash
total 12K
   0 drwxrwxrwx+ 3 Administrator None    0 Jul 21 23:38 .
   0 drwxrwxrwx+ 3 Administrator None    0 Jul 21 23:34 ..
4.0K -rwxr-xr-x  1 Administrator None 1.2K Jul 21 23:34 .bash_profile
4.0K -rwxr-xr-x  1 Administrator None 3.1K Jul 21 23:34 .bashrc
   0 drwxr-xr-x+ 4 Administrator None    0 Jul 21 23:39 .gem
4.0K -rwxr-xr-x  1 Administrator None 1.5K Jul 21 23:34 .inputrc

Here, the command ^-al^-lash replaced the -al with -lash in our previous ls command and re-ran it.

Goaccess 2018-04-09T18:29:00+02:00 2018-04-09T18:29:00+02:00

What is it?

GoAccess is an open source real-time web log analyzer and interactive viewer that runs in a terminal in *nix systems or through your browser.

It provides fast and valuable HTTP statistics for system administrators that require a visual server report on the fly.


GoAccess parses the specified web log file and outputs the data to the X terminal. Features include:

  • Completely Real Time All panels and metrics are timed to be updated every 200 ms on the terminal output and every second on the HTML output.

  • No configuration needed You can just run it against your access log file, pick the log format and let GoAccess parse the access log and show you the stats.

  • Track Application Response Time Track the time taken to serve the request. Extremely useful if you want to track pages that are slowing down your site.

  • Nearly All Web Log Formats GoAccess allows any custom log format string. Predefined options include Apache, Nginx, Amazon S3, Elastic Load Balancing, CloudFront, etc.

  • Incremental Log Processing Need data persistence? GoAccess has the ability to process logs incrementally through the on-disk B+Tree database.

  • Only one dependency GoAccess is written in C. To run it, you only need ncurses as a dependency. That's it. It even has its own Web Socket server.

  • Visitors Determine the amount of hits, visitors, bandwidth, and metrics for slowest running requests by the hour, or date.

  • Metrics per Virtual Host Have multiple Virtual Hosts (Server Blocks)? A panel that displays which virtual host is consuming most of the web server resources.

  • Color Scheme Customizable Tailor GoAccess to suit your own color taste/schemes. Either through the terminal, or by simply updating the stylesheet on the HTML output.

  • Support for large datasets GoAccess features an on-disk B+Tree storage for large datasets where it is not possible to fit everything in memory.

  • Docker support GoAccess comes with a default Docker build that will listen for HTTP connections on port 7890. You can still fully configure it by using volume mapping and editing goaccess.conf.

  • and more... visit the official site for more details.

Why GoAccess?

GoAccess was designed to be a fast, terminal-based log analyzer. Its core idea is to quickly analyze and view web server statistics in real time without needing to use your browser (great if you want to do a quick analysis of your access log via SSH, or if you simply love working in the terminal).

While the terminal output is the default output, it has the capability to generate a complete real-time HTML report, as well as a JSON, and CSV report.

You can see it as more of a monitoring command tool than anything else.


The user can make use of the following keys:

  • F1 or h – main help,
  • F5 – redraw main window,
  • q – quit the program, current window or module,
  • o or ENTER – expand selected module,
  • [Shift] 0-9 – set selected module to active,
  • Up arrow – scroll up main dashboard,
  • Down arrow – scroll down main dashboard,
  • j – scroll down within expanded module,
  • k – scroll up within expanded module,
  • c – set or change scheme color,
  • CTRL + f – scroll forward one screen within active module,
  • CTRL + b – scroll backward one screen within active module,
  • TAB – iterate modules (forward),
  • SHIFT + TAB – iterate modules (backward),
  • s – sort options for current module,
  • / – search across all modules,
  • n – find position of the next occurrence,
  • g – move to the first item or top of screen,
  • G – move to the last item or bottom of screen.

Examples can be found by running man goaccess.

Code 2018-04-08T13:30:00+02:00 2018-04-08T13:30:00+02:00


linux-dash 2018-04-07T12:09:00+02:00 2018-04-07T12:09:00+02:00

Linux-dash v2.0

A simple & low-overhead web dashboard for linux systems

Demo  |  Docs


  • Small ----- Under 400KB on disk (with .git removed)!
  • Simple ---- A minimalist, beautiful dashboard
  • Easy ------ Drop-in installation
  • Versatile -- Choose your stack from Node.js, Go, Python, PHP


Step 1

## 1. clone the repo
git clone --depth 1

## 2. go to the cloned directory
cd linux-dash/app/server

OR, if you prefer to download manually:

## 1. Download the .zip
curl -LOk && unzip

## 2. navigate to downloaded & unzipped dir
cd linux-dash-master/app/server

Step 2

See the instructions for your preferred linux-dash server (all included):

If Using Node.js

## install dependencies
npm install --production

## start linux-dash (on port 80 by default; may require sudo)
## You may change this with the `LINUX_DASH_SERVER_PORT` environment variable (eg. `LINUX_DASH_SERVER_PORT=8080 node server`)
## or provide a --port flag to the command below
node index.js

If Using Go

## start the server (on port 80 by default; may require sudo)
go run index.go

To build a binary, run go build && ./server -h. See @tehbilly's notes for binary usage options.

If Using Python

# Start the server (on port 80 by default; may require sudo).

If Using PHP

  1. Make sure you have the exec, shell_exec, and escapeshellarg functions enabled
  2. Point your web server to app/ directory under linux-dash
  3. Restart your web server (Apache, nginx, etc.)


For general help, please use the Gitter chat room.


It is strongly recommended that all linux-dash installations be protected via a security measure of your choice.

Linux Dash does not provide any security or authentication features.

munin 2018-04-07T11:56:00+02:00 2018-04-07T11:56:00+02:00


Installing Munin

You will need to install "munin-master" on the machine that will collect data from all nodes, and graph the results. When starting with Munin, it should be enough to install the Munin master on one server.

The munin master runs munin-httpd, which is a basic webserver providing the munin web interface on port 4948/tcp.

Install "munin-node" on the machines that shall be monitored by Munin. Install "munin-client" on the machines that handle web page monitoring.

Source or packages?

With open source software, you can choose to install binary packages or install from source-code.

We strongly recommend a packaged install, as the source distribution isn't as well tested as the packaged one. The current state of the packages is so satisfactory that even the developers use them.

Installing Munin on most relevant operating systems can usually be done with the systems package manager, typical examples being:

Installing Munin from a package


Munin is distributed with both Debian and Ubuntu.

In order to get Munin up and running type

 $ sudo apt-get install munin-node

on all nodes, and

 $ sudo apt-get install munin

on the master.

Please note that this might not be the latest version of Munin. On Debian you have the option of enabling "backports", which may give access to later versions of Munin.

Link Project:

monitorix 2018-04-06T15:24:00+02:00 2018-04-06T15:24:00+02:00

Monitorix web tool

Installation Monitorix

Via the repository

# apt-get update
# apt-get install monitorix


Or manually: download the package first, take care of the dependencies, and finally install it:

# apt-get update
# apt-get install rrdtool perl libwww-perl libmailtools-perl libmime-lite-perl \
librrds-perl libdbi-perl libxml-simple-perl libhttp-server-simple-perl \ 
libconfig-general-perl libio-socket-ssl-perl
# dpkg -i monitorix*.deb
# apt-get -f install

Configuring Monitorix

Monitorix ships with a default configuration file which works out-of-the-box. Moreover, the service is automatically started on package installation.

To fine-tune your installation, take a look at the /etc/monitorix/monitorix.conf file (and optionally the documentation) to adjust some things (like network interfaces, filesystems, disks, etc.).

The Debian package also comes with an extra configuration file in /etc/monitorix/conf.d/00-debian.conf that includes some options specially adapted for Debian systems.
This file will be loaded right after the main configuration file, hence some options in the main configuration will be overwritten by this extra file.

When you are done, restart Monitorix to let your changes take effect:

service monitorix restart

Link official :


netdata 2018-04-06T15:22:00+02:00 2018-04-06T15:22:00+02:00


New to netdata? Here is a live demo:

netdata is a system for distributed real-time performance and health monitoring. It provides unparalleled insights, in real-time, of everything happening on the system it runs (including applications such as web and database servers), using modern interactive web dashboards.

netdata is fast and efficient, designed to permanently run on all systems (physical & virtual servers, containers, IoT devices), without disrupting their core function.

netdata runs on Linux, FreeBSD, and MacOS.

$  apt-get install zlib1g-dev uuid-dev libmnl-dev gcc make git autoconf autoconf-archive autogen automake pkg-config curl jq nodejs


For all Linux systems, you can use this one liner to install the git version of netdata:

# basic netdata installation
$ bash <(curl -Ss

# install required packages for all netdata plugins
$ bash <(curl -Ss all

The same, fully unattended (no prompts, and without starting netdata):

$ bash <(curl -Ss all --dont-wait --dont-start-it

You can install a pre-compiled static binary of netdata for any Intel/AMD 64bit Linux system (even those that don't have a package manager, like CoreOS, CirrOS, busybox systems, etc). You can also use these packages on systems with broken or unsupported package managers.

To install the latest version use this:

$ bash <(curl -Ss

For automated installs, append a space + --dont-wait to the command line. You can also append --dont-start-it to prevent the installer from starting netdata. Example:

$ bash <(curl -Ss --dont-wait --dont-start-it

If your shell fails to handle the above one liner, do this:

# download the script with curl
$ curl >/tmp/

# or, download the script with wget
$ wget -O /tmp/

# run the downloaded script (any sh is fine, no need for bash)
$ sh /tmp/

The static builds install netdata at /opt/netdata.

The static binary files are kept in this repo:

Download any of the .run files, and run it. These files are self-extracting shell scripts built with makeself.

The target system does not need to have bash installed.

The same files can be used for updates too.

$  cd /opt
$  git clone netdata --depth=1
$  cd netdata
$  ./


netdata collects several thousands of metrics per device. All these metrics are collected and visualized in real-time.

Almost all metrics are auto-detected, without any configuration.

This is a list of what it currently monitors:

  • CPU
    usage, interrupts, softirqs, frequency, total and per core, CPU states

  • Memory
    RAM, swap and kernel memory usage, KSM (Kernel Samepage Merging), NUMA

  • Disks
    per disk: I/O, operations, backlog, utilization, space, software RAID (md)


  • Network interfaces
    per interface: bandwidth, packets, errors, drops


  • IPv4 networking
    bandwidth, packets, errors, fragments, tcp: connections, packets, errors, handshake, udp: packets, errors, broadcast: bandwidth, packets, multicast: bandwidth, packets

  • IPv6 networking
    bandwidth, packets, errors, fragments, ECT, udp: packets, errors, udplite: packets, errors, broadcast: bandwidth, multicast: bandwidth, packets, icmp: messages, errors, echos, router, neighbor, MLDv2, group membership, break down by type

  • Linux DDoS protection
    SYNPROXY metrics

  • fping latencies
    for any number of hosts, showing latency, packets and packet loss


  • Processes
    running, blocked, forks, active

  • Entropy
    random numbers pool, used in cryptography


  • Applications
    by grouping the process tree and reporting CPU, memory, disk reads, disk writes, swap, threads, pipes, sockets - per group


  • Users and User Groups resource usage
    by summarizing the process tree per user and group, reporting: CPU, memory, disk reads, disk writes, swap, threads, pipes, sockets

  • Apache and lighttpd web servers
    mod-status (v2.2, v2.4) and cache log statistics, for multiple servers

  • statsd
    netdata is a fully featured statsd server

  • ceph
    OSD usage, Pool usage, number of objects, etc.

And you can extend it, by writing plugins that collect data from any source, using any computer language.

Check the netdata wiki.

netdata is GPLv3+.

It re-distributes other open-source tools and libraries. Please check the third party licenses.

whatisport 2018-04-06T14:45:00+02:00 2018-04-06T14:45:00+02:00


Find ports and their associated services from your terminal.

The data used comes from an online site providing the list of port assignments.

Installing whatportis with python-pip:

$ pip install whatportis

To look up a port or service:

$  whatportis 443
| Name  | Port | Protocol | Description                |
| https | 443  |   tcp    | http protocol over TLS/SSL |
| https | 443  |   udp    | http protocol over TLS/SSL |
| https | 443  |   sctp   | HTTPS                      |

Example with --like:

 $ whatportis munin --like
| Name  | Port | Protocol | Description              |
| munin | 4949 |   tcp    | Munin Graphing Framework |
| munin | 4949 |   udp    | Munin Graphing Framework |
eZ Server Monitor 2018-04-06T11:58:00+02:00 2018-04-06T11:58:00+02:00

eZ Server Monitor `Web

The Web version of eZ Server Monitor is a PHP script:

eZ Server Monitor `sh

The Bash version (eSM`sh), for the Unix terminal.

Link project:



SH -

glances 2018-04-06T11:40:00+02:00 2018-04-06T11:40:00+02:00


Monitoring #Bash

Glances Auto Install script

To install the latest Glances production ready version, just enter the following command line:

$ curl -L | /bin/bash


$ wget -O- | /bin/bash

Note: Only supported on some GNU/Linux distributions.

Glances is on PyPI. By using PyPI, you are sure to have the latest stable version.

To install, simply use pip:

$ pip install glances

Others methods ? Read the official installation documentation.

htop 2018-04-06T11:03:00+02:00 2018-04-06T11:03:00+02:00


htop is a system monitor for Unix-like operating systems, very similar to top. Like top it runs in the terminal, but it offers a friendlier (and more colorful) text-mode interface than the latter.

It is written in C using the ncurses library.

Installation via apt-get:

$ apt-get install htop


  • To change a meter's display style (LED, Bar, Text or Graph), press the space bar once on the chosen element.

  • Requires: the ncurses library

    $ apt-get  install libncurses5-dev libncursesw5-dev 
gtop 2018-04-06T10:59:00+02:00 2018-04-06T10:59:00+02:00


gtop: a system monitoring dashboard for the terminal.


  • Linux / OSX / Windows (partial support)
  • Node.js >= v4


$ npm install gtop -g


You can sort the process table by pressing

  • p: Process Id
  • c: CPU usage
  • m: Memory usage


If you see question marks or other different characters, try to run it with these environment variables:

$ LANG=en_US.utf8 TERM=xterm-256color gtop


Released under the MIT license.

Link Project:

Monitoring 2018-04-06T10:47:00+02:00 2018-04-06T10:47:00+02:00

Tools Monitoring

s-tui 2018-04-06T10:43:00+02:00 2018-04-06T10:43:00+02:00


s-tui is a terminal UI for monitoring your computer. It lets you monitor CPU temperature, frequency, power and utilization graphically from the terminal.

The Stress Terminal UI: s-tui



What it does

  • Monitoring your CPU temperature/utilization/frequency/power
  • Shows performance dips caused by thermal throttling
  • Requires minimal resources
  • Requires no X-server
  • Built in options for stressing the CPU (stress/stress-ng)




sudo s-tui

Simple installation

pip (x86 + ARM)

The most up to date version of s-tui is available with pip

sudo pip install s-tui

Or if you cannot use sudo:

pip install s-tui --user

If you are installing s-tui on a Raspberry-Pi you might need to install python-dev first


s-tui manual
usage: [-h] [-d] [-c] [-t] [-j] [-nm] [-v] [-ct CUSTOM_TEMP]

TUI interface:

The side bar houses the controls for the displayed graphs.
At the bottom of the side bar, more information is presented in text form.

* Use the arrow keys or 'hjkl' to navigate the side bar
* Toggle between stressed and regular operation using the radio buttons in 'Modes'.
* If you wish to alternate stress defaults, you can do it in 'Stress options'
* Select a different temperature sensors from the 'Temp Sensors' menu
* Change time between updates using the 'Refresh' field
* Use the 'Reset' button to reset graphs and statistics
* Toggle displayed graphs by selecting the [X] check box
* If a sensor is not available on your system, N/A is presented
* If your system supports it, you can use the utf8 button to get a smoother graph
* Press 'q' or the 'Quit' button to quit

* Run `s-tui --help` to get this message and additional cli options

optional arguments:
  -h, --help            show this help message and exit
  -d, --debug           Output debug log to _s-tui.log
  -c, --csv             Save stats to csv file
  -t, --terminal        Display a single line of stats without tui
  -j, --json            Display a single line of stats in JSON format
  -nm, --no-mouse       Disable Mouse for TTY systems
  -v, --version         Display version
  -ct CUSTOM_TEMP, --custom_temp CUSTOM_TEMP

                        Custom temperature sensors.
                        The format is: <sensors>,<number>
                        As it appears in 'sensors'
                        > sensors
                        temp1: +47.0C
                        temp2: +35.0C
                        temp3: +37.0C

                        use: -ct it8792,0 for temp 1

  -cf CUSTOM_FAN, --custom_fan CUSTOM_FAN
                        Similar to custom temp
                        Adapter: ISA adapter
                        fan1:        1975 RPM

                        use: -cf thinkpad,0 for fan1


s-tui is a great tool for monitoring. If you would like to stress your computer, install stress. Stress options will then show up in s-tui (optional)

sudo apt-get install stress


s-tui is a self-contained application which can run out-of-the-box and doesn't need config files to drive its core features. However, additional features like running scripts when a certain threshold has been exceeded (e.g. CPU temperature) does necessitate creating a config directory. This directory will be made in ~/.config/s-tui by default.

Adding threshold scripts

s-tui gives you the ability to run arbitrary shell scripts when a certain threshold is surpassed, like your CPU temperature. You can define this custom behaviour by adding a shell file to the directory ~/.config/s-tui/hooks.d with one of the following names, depending on what threshold you're interested in reacting to:

  • triggered when the CPU temperature threshold is exceeded

If s-tui finds a script in the hooks directory with the name of a source it supports, it will run that script every 30 seconds as long as the current value of the source remains above the threshold.

Note that at the moment only CPU temperature threshold hooks are supported.
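As a sketch of such a hook (the filename s-tui expects is not shown above, so treat it as a placeholder), the script just needs to be an executable shell file, for example one that appends to a log:

```shell
#!/bin/sh
# Hypothetical hook body: s-tui would run this every 30 seconds while
# the CPU temperature stays above the configured threshold.
echo "$(date): CPU temperature threshold exceeded" >> "$HOME/s-tui-alerts.log"
```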

More installation methods


The latest stable version of s-tui is available via pip. To install pip on Ubuntu run:
sudo apt-get install gcc python-dev python-pip
Once pip is installed, install s-tui from pip:
(sudo) pip install s-tui

A deprecated ppa is available (tested on Ubuntu 16.04)

sudo add-apt-repository ppa:amanusk/python-s-tui
sudo apt-get update
sudo apt-get install python-s-tui


AUR packages of s-tui are available

s-tui is the latest stable release version, maintained by @DonOregano
s-tui-git follows the master branch, maintained by @MauroMombelli
install with
(sudo) yaourt -S s-tui

Run source code

Running s-tui from source

git clone

Install the dependencies; these are needed to run s-tui:

(sudo) pip install urwid
(sudo) pip install psutil

Install stress (optional)

sudo apt-get install stress

Run the .py file

(sudo) python -m s_tui.s_tui

OPTIONAL integration of FIRESTARTER (via submodule, does not work on all systems)

FIRESTARTER is a great tool to stress your system to the extreme. If you would like, you can integrate the FIRESTARTER submodule into s-tui. To build FIRESTARTER:

git submodule init
git submodule update

Once you have completed these steps, you can either:

  • Install FIRESTARTER to make it accessible to s-tui, e.g make a soft-link to FIRESTARTER in /usr/local/bin.
  • Run s-tui from the main project directory with python -m s_tui.s_tui
    An option to run FIRESTARTER will then be available in s-tui


s-tui uses psutil to probe some of your hardware information. If your hardware is not supported, you might not see all the information.

  • On Intel machines:
    Running s-tui as root gives access to the maximum Turbo Boost frequency available to your CPU when stressing all cores. Running without root will display the Turbo Boost available on a single core.

  • Power read is supported on Intel Core CPUs of the second generation and newer (Sandy Bridge)

  • s-tui tested to run on Raspberry-Pi 3,2,1


Q: How is this different from htop?
A: s-tui is not a processes monitor like htop. The purpose is to monitor your CPU statistics and have an option to test the system under heavy load. (Think AIDA64 stress test, not task manager).

Q: What features require sudo permissions?
A: Top Turbo frequency varies depending on how many cores are utilized. Sudo permissions are required in order to accurately read the top frequency when all the cores are utilized.

Q: I don't have a temperature graph
A: Systems have different sensors to read CPU temperature. If you do not see a temperature read, your system might not be supported (yet). You can try manually setting the sensor with the cli interface (see --help), or selecting a sensor from the 'Temp Sensors' menu

Q: I have a temperature graph, but it is wrong.
A: A default sensor is selected for temperature reads. On some systems this sensor might indicate the wrong temperature. You can manually select a sensor from the 'Temp Sensors' menu or using the cli interface (see --help)

Q: I am using the TTY with no X server and s-tui crashes on start
A: By default, s-tui handles mouse inputs, which causes some systems to crash. Try running s-tui --no-mouse

Bash help command 2018-03-27T10:18:00+02:00 2018-03-27T10:18:00+02:00


export displays all environment variables. If you want to get details of a specific variable, use echo $VARIABLE_NAME.



$ export

$ echo $AWS_HOME
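A quick illustration (AWS_HOME is just the arbitrary variable name from the example above):

```shell
export AWS_HOME=/opt/aws    # define and export a variable
echo "$AWS_HOME"            # prints /opt/aws
export | grep AWS_HOME      # the variable now shows up in the export list
```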

whatis shows descriptions of user commands, system calls, library functions, and others from the manual pages.

whatis something


$ whatis bash
bash (1)             - GNU Bourne-Again SHell

whereis searches for executables, source files, and manual pages using a database built automatically by the system.

whereis name


$ whereis php

which searches for executables in the directories specified by the environment variable PATH. This command will print the full path of the executable(s).

which program_name 


$ which php

clear clears the content of the terminal window.

cat can be used for the following purposes under UNIX or Linux.

  • Display text files on screen
  • Copy text files
  • Combine text files
  • Create new text files
    cat filename
    cat file1 file2 
    cat file1 file2 > newcombinedfile
    cat < file1 > file2 #copy file1 to file2

The chmod command stands for "change mode" and allows you to change the read, write, and execute permissions on your files and folders. For more information on this command check this link.

chmod -options filename

The chown command stands for "change owner", and allows you to change the owner of a given file or folder, which can be a user and a group. Basic usage is simple: first comes the user (owner), and then the group, delimited by a colon.

chown -options user:group filename
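For example (the filename is a placeholder; note that changing the owner with chown usually requires root):

```shell
touch notes.txt       # create an empty file
chmod 644 notes.txt   # owner: read/write; group and others: read-only
ls -l notes.txt       # the mode column now shows -rw-r--r--
```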

Copies a file from one location to another.

cp filename1 filename2

Where filename1 is the source path to the file and filename2 is the destination path to the file.

Compares files, and lists their differences.

diff filename1 filename2

Determine file type.

file filename


$ file index.html
 index.html: HTML document, ASCII text

Find files in directory

find directory options pattern


$ find . -name
$ find /home/user1 -name '*.png'
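A self-contained variant of the second example (the photos directory is made up for the demo):

```shell
mkdir -p photos
touch photos/a.png photos/b.png photos/readme.txt
find photos -name '*.png'    # lists photos/a.png and photos/b.png
```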

Un-compresses files compressed by gzip.

gunzip filename

Lets you look at a gzipped file without actually having to gunzip it.

gzcat filename

Compresses files.

gzip filename
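A quick round trip through all three commands (on Linux, zcat is the usual name for gzcat):

```shell
printf 'hello\n' > msg.txt
gzip msg.txt        # compresses to msg.txt.gz and removes msg.txt
zcat msg.txt.gz     # prints "hello" without decompressing the file on disk
gunzip msg.txt.gz   # restores msg.txt
cat msg.txt         # prints "hello"
```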

Outputs the first 10 lines of file

head filename
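For instance, on a generated file:

```shell
seq 1 20 > numbers.txt   # write the numbers 1..20, one per line
head -n 3 numbers.txt    # prints the first three lines: 1 2 3
```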

Check out the printer queue.



$ lpq
Rank    Owner   Job     File(s)                         Total Size
active  adnanad 59      demo                            399360 bytes
1st     adnanad 60      (stdin)                         0 bytes

Print the file.

lpr filename

Remove something from the printer queue.

lprm jobnumber

Lists your files. ls has many options: -l lists files in 'long format', which contains the exact size of the file, who owns the file, who has the right to look at it, and when it was last modified. -a lists all files, including hidden files. For more information on this command check this link.

ls option


$ ls -la
drwxr-xr-x  33 adnan  staff    1122 Mar 27 18:44 .
drwxrwxrwx  60 adnan  staff    2040 Mar 21 15:06 ..
-rw-r--r--@  1 adnan  staff   14340 Mar 23 15:05 .DS_Store
-rw-r--r--   1 adnan  staff     157 Mar 25 18:08 .bumpversion.cfg
-rw-r--r--   1 adnan  staff    6515 Mar 25 18:08 .config.ini
-rw-r--r--   1 adnan  staff    5805 Mar 27 18:44 .config.override.ini
drwxr-xr-x  17 adnan  staff     578 Mar 27 23:36 .git
-rwxr-xr-x   1 adnan  staff    2702 Mar 25 18:08 .gitignore

Shows the first part of a file (move forward with the space bar, and type q to quit).

more filename

Moves a file from one location to another.

mv filename1 filename2

Where filename1 is the source path to the file and filename2 is the destination path to the file.

It can also be used to rename a file.

mv old_name new_name

Removes a file. Using this command on a directory gives you an error: "rm: directory: is a directory". To remove a directory you have to pass -r, which removes the contents of the directory recursively. Optionally you can use the -f flag to force the deletion, i.e. without any confirmation.

rm filename
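A disposable sketch (scratch/ is a hypothetical directory):

```shell
mkdir -p scratch/sub
touch scratch/sub/a.txt
rm scratch/sub/a.txt   # remove a single file
rm -rf scratch         # remove the whole directory tree, without confirmation
```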

Outputs the last 10 lines of a file. Use -f to output appended data as the file grows.

tail filename
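A sketch with a throwaway file (nums.txt is a hypothetical name); the -f form is commented out because it runs until interrupted:

```shell
seq 1 20 > nums.txt
tail nums.txt               # last 10 lines: 11 through 20
tail -n 3 nums.txt          # last 3 lines: 18, 19, 20
# tail -f /var/log/syslog   # follow new lines as the file grows (Ctrl-C to stop)
```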

Updates the access and modification timestamps of a file. If the file doesn't exist, it is created.

touch filename



awk is one of the most useful commands for handling text files. It operates on a file line by line. By default it uses whitespace to separate the fields. The most common syntax for the awk command is:

awk '/search_pattern/ { action_to_take_if_pattern_matches; }' file_to_parse

Let's take the following file, /etc/passwd. Here's the sample data that this file contains:


Now let's get only the usernames from this file. The -F option specifies the field separator; in our case it's ':'. '{ print $1 }' means print out the first field.

awk -F':' '{ print $1 }' /etc/passwd

After running the above command you will get the following output.


For more detail on how to use awk, check the following link.
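A self-contained sketch using a two-line stand-in for /etc/passwd:

```shell
# extract the first :-delimited field (the username) from each line
printf 'alice:x:1000\nbob:x:1001\n' | awk -F':' '{ print $1 }'
# prints:
# alice
# bob
```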

Remove sections from each line of files


red riding hood went to the park to play

show me columns 2, 7, and 9 with a space as a separator

cut -d " " -f2,7,9 example.txt
riding park play

Display a line of text

display "Hello World"

echo Hello World
Hello World

display "Hello World" with newlines between words

echo -ne "Hello\nWorld\n"

Print lines matching a pattern - Extended Expression (alias for: 'grep -E')


Lorem ipsum
dolor sit amet, 
sadipscing elitr,
sed diam nonumy
eirmod tempor
invidunt ut labore
et dolore magna
aliquyam erat, sed
diam voluptua. At
vero eos et
accusam et justo
duo dolores et ea
rebum. Stet clita
kasd gubergren,
no sea takimata
sanctus est Lorem
ipsum dolor sit

display lines that have either "Lorem" or "dolor" in them.

egrep '(Lorem|dolor)' example.txt
grep -E '(Lorem|dolor)' example.txt
Lorem ipsum
dolor sit amet,
et dolore magna
duo dolores et ea
sanctus est Lorem
ipsum dolor sit

Print lines matching a pattern - FIXED pattern matching (alias for: 'grep -F')


Lorem ipsum
dolor sit amet,
sadipscing elitr,
sed diam nonumy
eirmod tempor
foo (Lorem|dolor) 
invidunt ut labore
et dolore magna
aliquyam erat, sed
diam voluptua. At
vero eos et
accusam et justo
duo dolores et ea
rebum. Stet clita
kasd gubergren,
no sea takimata
sanctus est Lorem
ipsum dolor sit

Find the exact string '(Lorem|dolor)' in example.txt

fgrep '(Lorem|dolor)' example.txt
grep -F '(Lorem|dolor)' example.txt
foo (Lorem|dolor) 

Simple optimal text formatter

example: example.txt (1 line)

Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet.

output the lines of example.txt to 20 character width

cat example.txt | fmt -w 20
Lorem ipsum
dolor sit amet,
sadipscing elitr,
sed diam nonumy
eirmod tempor
invidunt ut labore
et dolore magna
aliquyam erat, sed
diam voluptua. At
vero eos et
accusam et justo
duo dolores et ea
rebum. Stet clita
kasd gubergren,
no sea takimata
sanctus est Lorem
ipsum dolor sit

Looks for text inside files. You can use grep to search for lines of text that match one or more regular expressions; it outputs only the matching lines.

grep pattern filename


$ grep admin /etc/passwd
_kadmin_admin:*:218:-2:Kerberos Admin Service:/var/empty:/usr/bin/false
_kadmin_changepw:*:219:-2:Kerberos Change Password Service:/var/empty:/usr/bin/false
_krb_kadmin:*:231:-2:Open Directory Kerberos Admin Service:/var/empty:/usr/bin/false

You can also force grep to ignore word case by using the -i option. -r can be used to search all files under the specified directory, for example:

$ grep -r admin /etc/

And -w to search for whole words only. For more detail on grep, check the following link.
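A quick sketch of -i and -w together (demo.txt is a hypothetical file):

```shell
printf 'Admin\nuser\nadministrator\n' > demo.txt
grep -i admin demo.txt     # case-insensitive: matches "Admin" and "administrator"
grep -wi admin demo.txt    # whole words only: matches "Admin" but not "administrator"
```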

Number lines of files


Lorem ipsum
dolor sit amet,
sadipscing elitr,
sed diam nonumy
eirmod tempor
invidunt ut labore
et dolore magna
aliquyam erat, sed
diam voluptua. At
vero eos et
accusam et justo
duo dolores et ea
rebum. Stet clita
kasd gubergren,
no sea takimata
sanctus est Lorem
ipsum dolor sit

show example.txt with line numbers

nl -s". " example.txt 
     1. Lorem ipsum
     2. dolor sit amet,
     3. consetetur
     4. sadipscing elitr,
     5. sed diam nonumy
     6. eirmod tempor
     7. invidunt ut labore
     8. et dolore magna
     9. aliquyam erat, sed
    10. diam voluptua. At
    11. vero eos et
    12. accusam et justo
    13. duo dolores et ea
    14. rebum. Stet clita
    15. kasd gubergren,
    16. no sea takimata
    17. sanctus est Lorem
    18. ipsum dolor sit
    19. amet.

Stream editor for filtering and transforming text


Hello This is a Test 1 2 3 4

replace all spaces with hyphens

sed 's/ /-/g' example.txt

replace all digits with "d"

sed 's/[0-9]/d/g' example.txt
Hello This is a Test d d d d

Sort lines of text files



sort example.txt

randomize a sorted example.txt

sort example.txt | sort -R

Translate or delete characters


Hello World Foo Bar Baz!

take all lower case letters and make them upper case

cat example.txt | tr 'a-z' 'A-Z' 

take all spaces and make them into newlines

cat example.txt | tr ' ' '\n'

Report or omit repeated lines



show only the unique lines of example.txt (you need to sort it first, since uniq only collapses adjacent duplicate lines)

sort example.txt | uniq

show each unique line along with a count of how many times it occurred

sort example.txt | uniq -c
    3 a
    2 b
    2 c
    1 d

Tells you how many lines, words and characters there are in a file.

wc filename


$ wc demo.txt
7459   15915  398400 demo.txt

Where 7459 is lines, 15915 is words and 398400 is characters.

Moves you from one directory to another. Running this

$ cd

moves you to home directory. This command accepts an optional dirname, which moves you to that directory.

cd dirname

Makes a new directory.

mkdir dirname
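A commonly used flag (not shown above) is -p, which creates missing parent directories and does not complain if the path already exists:

```shell
# create a nested directory tree in one go (path is a hypothetical example)
mkdir -p src/app/components
```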

Tells you which directory you are currently in.

pwd

Lists stopped or background jobs; resumes a stopped job in the background.

bg

Shows the month's calendar.

cal

Shows the current date and time.

date

Shows disk usage.

df

Gets DNS information for domain.

dig domain

Shows the disk usage of files or directories. For more information on this command check this link

du [option] [filename|directory]


  • -h (human readable) Displays output in kilobytes (K), megabytes (M) and gigabytes (G).
  • -s (suppress or summarize) Outputs the total disk space of a directory and suppresses reports for subdirectories.


du -sh pictures
1.4M pictures

Brings the most recent job to the foreground.

fg

Displays information about a user.

finger username

Lists the jobs running in the background, giving the job number.

jobs

Lists the last logins of the specified user.

last yourUsername

Shows the manual for specified command.

man command

Allows the currently logged-in user to change their password.

passwd

Pings host and outputs results.

ping host

Lists your processes.

ps -u yourusername

Use the flags -ef: -e for every process and -f for a full listing.

ps -ef

Shows what your disk quota is.

quota -v

Transfer files between a local host and a remote host or between two remote hosts.

copy from local host to remote host

scp source_file user@host:directory/target_file

copy from remote host to local host

scp user@host:directory/source_file target_file
scp -r user@host:directory/source_folder target_folder

This command also accepts an option -P that can be used to connect to a specific port.

scp -P port user@host:directory/source_file target_file

ssh (SSH client) is a program for logging into and executing commands on a remote machine.

ssh user@host

This command also accepts an option -p that can be used to connect to a specific port.

ssh -p port user@host

Displays your currently active processes.

top

Shows kernel information.

uname -a

Shows the current uptime.

uptime

Displays who is online.

w

Downloads a file.

wget file

Returns the current logged-in username.

whoami

Gets whois information for domain.

whois domain

Kills (ends) the process with the PID you give.

kill PID

Kills all processes with the given name.

killall processname
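A disposable sketch using a background sleep as the victim process:

```shell
sleep 300 &          # start a throwaway background process
pid=$!               # $! holds the PID of the last background job
kill "$pid"          # polite termination (SIGTERM)
# kill -9 "$pid"     # forceful termination (SIGKILL), when SIGTERM is ignored
# killall sleep      # would terminate every process named "sleep"
```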

The & symbol instructs the command to run as a background process in a subshell.

command &

nohup stands for "no hang up". It allows you to run a command, process, or shell script that continues running in the background after you log out of the shell.

nohup command

Combine it with & to create background processes

nohup command &

Open .bash_profile by running the following command: nano ~/.bash_profile

alias dockerlogin='ssh www-data@adnan.local -p2222' # add your alias in .bash_profile

nano ~/.bashrc

export hotellogs="/workspace/hotel-api/storage/logs"

source ~/.bashrc
cd $hotellogs

Make your bash scripts more robust by reliably performing cleanup.

function finish {
  # your cleanup here, e.g. kill any forked processes
  jobs -p | xargs kill
}
trap finish EXIT

When you run export FOO=BAR, the variable is only exported in the current shell and all its children. To make it persist across future sessions, simply append the export command to your ~/.bash_profile file:

echo export FOO=BAR >> ~/.bash_profile

You can easily access your scripts by creating a bin folder in your home directory with mkdir ~/bin; any script you put in this folder can then be run from any directory.

If they are not accessible, try appending the code below to your ~/.bash_profile file, then run source ~/.bash_profile.

    # set PATH so it includes user's private bin if it exists
    if [ -d "$HOME/bin" ] ; then
        PATH="$HOME/bin:$PATH"
    fi

You can easily debug a bash script by passing different options to the bash command. For example, -n will not run commands and only checks for syntax errors; -v echoes commands before running them; -x echoes commands after command-line processing.

bash -n scriptname
bash -v scriptname
bash -x scriptname

License: CC BY 4.0

Htaccess 2018-03-27T10:10:00+02:00 2018-03-27T10:10:00+02:00

NOTE: .htaccess files are for people that do not have rights to edit the main server configuration file. They are intrinsically slower and more complicated than using the main config. Please see the howto in the httpd documentation for further details.

Disclaimer: While dropping the snippet into an .htaccess file is most of the time sufficient, there are cases when certain modifications might be required. Use at your own risk.

IMPORTANT: Apache 2.4 introduces a few breaking changes, most notably in access control configuration. For more information, check the upgrading document as well as this issue.

What we are doing here is mostly collecting useful snippets from all over the interwebs (for example, a good chunk is from Apache Server Configs) into one place. While we’ve been trying to credit where due, things might be missing. If you believe anything here is your work and credits should be given, let us know, or just send a PR.

Note: It is assumed that you have mod_rewrite installed and enabled.

RewriteEngine on
RewriteCond %{HTTP_HOST} ^example\.com [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [L,R=301,NC]

RewriteCond %{HTTP_HOST} !^$
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteCond %{HTTPS}s ^on(s)|
RewriteRule ^ http%1://www.%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

This works for any domain. Source

It’s still open for debate whether www or non-www is the way to go, so if you happen to be a fan of bare domains, here you go:

RewriteEngine on
RewriteCond %{HTTP_HOST} ^www\.example\.com [NC]
RewriteRule ^(.*)$ http://example.com/$1 [L,R=301]

RewriteEngine on
RewriteCond %{HTTP_HOST} ^www\.
RewriteCond %{HTTPS}s ^on(s)|off
RewriteCond http%1://%{HTTP_HOST} ^(https?://)(www\.)?(.+)$
RewriteRule ^ %1%3%{REQUEST_URI} [R=301,L]

RewriteEngine on
RewriteCond %{HTTPS} !on
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}

# Note: It’s also recommended to enable HTTP Strict Transport Security (HSTS)
# on your HTTPS website to help prevent man-in-the-middle attacks.
# See
<IfModule mod_headers.c>
    # Remove "includeSubDomains" if you don't want to enforce HSTS on all subdomains
    Header always set Strict-Transport-Security "max-age=31536000;includeSubDomains"
</IfModule>

Useful if you have a proxy in front of your server performing TLS termination.

RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}

RewriteCond %{REQUEST_URI} /+[^\.]+$
RewriteRule ^(.+[^/])$ %{REQUEST_URI}/ [R=301,L]

This snippet will redirect paths ending in slashes to their non-slash-terminated counterparts (except for actual directories), e.g. /foo/ to /foo. This is important for SEO, since it’s recommended to have a canonical URL for every page.

RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_URI} (.+)/$
RewriteRule ^ %1 [R=301,L]


Redirect 301 /oldpage.html http://www.example.com/newpage.html
Redirect 301 /oldpage2.html http://www.example.com/folder/


RedirectMatch 301 /subdirectory(.*)$1
RedirectMatch 301 ^/(.*).htm$ /$1.html
RedirectMatch 301 ^/200([0-9])/([^01])(.*)$ /$2$3
RedirectMatch 301 ^/category/(.*)$ /$1
RedirectMatch 301 ^/(.*)/htaccesselite-ultimate-htaccess-article.html(.*) /htaccess/htaccess.html
RedirectMatch 301 ^/(.*).html/1/(.*) /$1.html$2
RedirectMatch 301 ^/manual/(.*)$$1
RedirectMatch 301 ^/dreamweaver/(.*)$ /tools/$1
RedirectMatch 301 ^/z/(.*)$$1


RewriteEngine On
RewriteRule ^source-directory/(.*) /target-directory/$1 [R=301,L]

FallbackResource /index.fcgi

This example has an index.fcgi file in some directory; any request within that directory that fails to resolve to an existing filename/directory will be sent to the index.fcgi script. This is useful if you want all such URLs handled by the script while existing files are still served as usual. Get access to the original path from the PATH_INFO environment variable, as exposed to your scripting environment.

RewriteEngine On
RewriteRule ^$ index.fcgi/ [QSA,L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.fcgi/$1 [QSA,L]

This is a less efficient version of the FallbackResource directive (because using mod_rewrite is more complex than just handling the FallbackResource directive), but it’s also more flexible.

Redirect 301 / http://newsite.com/

This way does it with links intact. That is, oldsite.com/some/crazy/link.html will become newsite.com/some/crazy/link.html. This is extremely helpful when you are just “moving” a site to a new domain. Source

This snippet lets you use “clean” URLs -- those without a PHP extension, e.g. example.com/users instead of example.com/users.php.

RewriteEngine On
RewriteCond %{SCRIPT_FILENAME} !-d
RewriteRule ^([^.]+)$ $1.php [NC,L]


## Apache 2.2
Deny from all

## Apache 2.4
# Require all denied

But wait, this will lock you out from your content as well! Thus introducing...

## Apache 2.2
Order deny,allow
Deny from all
Allow from xxx.xxx.xxx.xxx

## Apache 2.4
# Require all denied
# Require ip is your IP. If you replace the last three digits with 0/12 for example, this will specify a range of IPs within the same network, thus saving you the trouble to list all allowed IPs separately. Source

Now of course there's a reversed version:

## Apache 2.2
Order deny,allow
Deny from xxx.xxx.xxx.xxx
Deny from yyy.yyy.yyy.yyy

## Apache 2.4
# Require all granted
# Require not ip xxx.xxx.xxx.xxx
# Require not ip yyy.yyy.yyy.yyy

Hidden files and directories (those whose names start with a dot .) should most, if not all, of the time be secured. For example: .htaccess, .htpasswd, .git, .hg...

RewriteCond %{SCRIPT_FILENAME} -d [OR]
RewriteCond %{SCRIPT_FILENAME} -f
RewriteRule "(^|/)\." - [F]

Alternatively, you can just raise a “Not Found” error, giving the attacker no clue:

RedirectMatch 404 /\..*$

These files may be left behind by some text/HTML editors (like Vi/Vim) and pose a great security risk if exposed to the public.

<FilesMatch "(\.(bak|config|dist|fla|inc|ini|log|psd|sh|sql|swp)|~)$">
    ## Apache 2.2
    Order allow,deny
    Deny from all
    Satisfy All

    ## Apache 2.4
    # Require all denied
</FilesMatch>


Options All -Indexes

RewriteEngine on
# Remove the following line if you want to block blank referrer too
RewriteCond %{HTTP_REFERER} !^$

RewriteCond %{HTTP_REFERER} !^https?://(.+\.)?example\.com [NC]
RewriteRule \.(jpe?g|png|gif|bmp)$ - [NC,F,L]

# If you want to display a “blocked” banner in place of the hotlinked image,
# replace the above rule with:
# RewriteRule \.(jpe?g|png|gif|bmp) [R,L]

Sometimes you want to disable image hotlinking from some bad guys only.

RewriteEngine on
RewriteCond %{HTTP_REFERER} ^https?://(.+\.)?badsite\.com [NC,OR]
RewriteCond %{HTTP_REFERER} ^https?://(.+\.)?badsite2\.com [NC,OR]
RewriteRule \.(jpe?g|png|gif|bmp)$ - [NC,F,L]

# If you want to display a “blocked” banner in place of the hotlinked image,
# replace the above rule with:
# RewriteRule \.(jpe?g|png|gif|bmp) [R,L]

First you need to create a .htpasswd file somewhere in the system:

htpasswd -c /home/fellowship/.htpasswd boromir

Then you can use it for authentication:

AuthType Basic
AuthName "One does not simply"
AuthUserFile /home/fellowship/.htpasswd
Require valid-user

AuthName "One still does not simply"
AuthType Basic
AuthUserFile /home/fellowship/.htpasswd

<Files "one-ring.o">
Require valid-user
</Files>

<FilesMatch ^((one|two|three)-rings?\.o)$>
Require valid-user
</FilesMatch>

This denies access for all users who are coming from (referred by) a specific domain. Source

RewriteEngine on
# Options +FollowSymlinks
RewriteCond %{HTTP_REFERER} somedomain\.com [NC,OR]
RewriteCond %{HTTP_REFERER} anotherdomain\.com
RewriteRule .* - [F]

This prevents the website from being framed (i.e. put into an iframe tag), while still allowing framing for a specific URI.

SetEnvIf Request_URI "/starry-night" allow_framing=true
Header set X-Frame-Options SAMEORIGIN env=!allow_framing

<IfModule mod_deflate.c>

    # Force compression for mangled headers.
    <IfModule mod_setenvif.c>
        <IfModule mod_headers.c>
            SetEnvIfNoCase ^(Accept-EncodXng|X-cept-Encoding|X{15}|~{15}|-{15})$ ^((gzip|deflate)\s*,?\s*)+|[X~-]{4,13}$ HAVE_Accept-Encoding
            RequestHeader append Accept-Encoding "gzip,deflate" env=HAVE_Accept-Encoding
        </IfModule>
    </IfModule>

    # Compress all output labeled with one of the following MIME-types
    # (for Apache versions below 2.3.7, you don't need to enable `mod_filter`
    #  and can remove the `<IfModule mod_filter.c>` and `</IfModule>` lines
    #  as `AddOutputFilterByType` is still in the core directives).
    <IfModule mod_filter.c>
        AddOutputFilterByType DEFLATE application/atom+xml \
                                      application/javascript \
                                      application/json \
                                      application/rss+xml \
                                      application/vnd.ms-fontobject \
                                      application/x-font-ttf \
                                      application/x-web-app-manifest+json \
                                      application/xhtml+xml \
                                      application/xml \
                                      font/opentype \
                                      image/svg+xml \
                                      image/x-icon \
                                      text/css \
                                      text/html \
                                      text/plain \
                                      text/x-component \
                                      text/xml
    </IfModule>
</IfModule>



Expires headers tell the browser whether they should request a specific file from the server or just grab it from the cache. It is advisable to set static content's expires headers to something far in the future.

If you don’t control versioning with filename-based cache busting, consider lowering the cache time for resources like CSS and JS to something like 1 week. Source

<IfModule mod_expires.c>
    ExpiresActive on
    ExpiresDefault                                      "access plus 1 month"

  # CSS
    ExpiresByType text/css                              "access plus 1 year"

  # Data interchange
    ExpiresByType application/json                      "access plus 0 seconds"
    ExpiresByType application/xml                       "access plus 0 seconds"
    ExpiresByType text/xml                              "access plus 0 seconds"

  # Favicon (cannot be renamed!)
    ExpiresByType image/x-icon                          "access plus 1 week"

  # HTML components (HTCs)
    ExpiresByType text/x-component                      "access plus 1 month"

  # HTML
    ExpiresByType text/html                             "access plus 0 seconds"

  # JavaScript
    ExpiresByType application/javascript                "access plus 1 year"

  # Manifest files
    ExpiresByType application/x-web-app-manifest+json   "access plus 0 seconds"
    ExpiresByType text/cache-manifest                   "access plus 0 seconds"

  # Media
    ExpiresByType audio/ogg                             "access plus 1 month"
    ExpiresByType image/gif                             "access plus 1 month"
    ExpiresByType image/jpeg                            "access plus 1 month"
    ExpiresByType image/png                             "access plus 1 month"
    ExpiresByType video/mp4                             "access plus 1 month"
    ExpiresByType video/ogg                             "access plus 1 month"
    ExpiresByType video/webm                            "access plus 1 month"

  # Web feeds
    ExpiresByType application/atom+xml                  "access plus 1 hour"
    ExpiresByType application/rss+xml                   "access plus 1 hour"

  # Web fonts
    ExpiresByType application/font-woff2                "access plus 1 month"
    ExpiresByType application/font-woff                 "access plus 1 month"
    ExpiresByType application/vnd.ms-fontobject         "access plus 1 month"
    ExpiresByType application/x-font-ttf                "access plus 1 month"
    ExpiresByType font/opentype                         "access plus 1 month"
    ExpiresByType image/svg+xml                         "access plus 1 month"
</IfModule>

By removing the ETag header, you prevent caches and browsers from validating files, forcing them to rely on your Cache-Control and Expires headers. Source

<IfModule mod_headers.c>
    Header unset ETag
</IfModule>
FileETag None

php_value <key> <val>

# For example:
php_value upload_max_filesize 50M
php_value max_execution_time 240

ErrorDocument 500 "Houston, we have a problem."
ErrorDocument 401
ErrorDocument 404 /errors/halflife3.html

Sometimes you want to force the browser to download some content instead of displaying it.

<Files *.md>
    ForceType application/octet-stream
    Header set Content-Disposition attachment
</Files>

Now there is a yang to this yin:

Sometimes you want to force the browser to display some content instead of downloading it.

<FilesMatch "\.(tex|log|aux)$">
    Header set Content-Type text/plain
</FilesMatch>

CDN-served webfonts might not work in Firefox or IE due to CORS. This snippet solves the problem.

<IfModule mod_headers.c>
    <FilesMatch "\.(eot|otf|ttc|ttf|woff|woff2)$">
        Header set Access-Control-Allow-Origin "*"
    </FilesMatch>
</IfModule>


Your text content should always be UTF-8 encoded, no?

# Use UTF-8 encoding for anything served text/plain or text/html
AddDefaultCharset utf-8

# Force UTF-8 for a number of file formats
AddCharset utf-8 .atom .css .js .json .rss .vtt .xml


If you’re on a shared host, chances are there are more than one version of PHP installed, and sometimes you want a specific version for your website. The following snippet should switch the PHP version for you.

AddHandler application/x-httpd-php56 .php

# Alternatively, you can use AddType
AddType application/x-httpd-php56 .php

Compatibility View in IE may affect how some websites are displayed. The following snippet should force IE to use the Edge Rendering Engine and disable the Compatibility View.

<IfModule mod_headers.c>
    BrowserMatch MSIE is-msie
    Header set X-UA-Compatible IE=edge env=is-msie
</IfModule>

If WebP images are supported and an image with a .webp extension and the same name is found at the same place as the jpg/png image that is going to be served, then the WebP image is served instead.

RewriteEngine On
RewriteCond %{HTTP_ACCEPT} image/webp
RewriteCond %{DOCUMENT_ROOT}/$1.webp -f
RewriteRule (.+)\.(jpe?g|png)$ $1.webp [T=image/webp,E=accept:1]


Update Auto apt Debian 2018-03-19T20:08:00+01:00 2018-03-19T20:08:00+01:00

in one command :p

Simply install unattended-upgrades with APT:

$ apt install unattended-upgrades

WARNING!! You need to configure the package with this command (as root):

$ dpkg-reconfigure unattended-upgrades

You should see text like this appear:

It is important to regularly update your system to maintain a high level of security.
By default, updates must be applied manually using a package management tool.
Alternatively, you can choose to automate this process of downloading and installing security updates.

Should updates for the stable release be automatically downloaded and installed? Yes

Answer: YES

Please specify a value for the "Origin-Pattern" for unattended-upgrades. A package will be upgraded only if its metadata matches all of the keywords specified here. "Origin-Pattern" that packages must match in order to be upgraded: "origin=Debian,codename=${distro_codename},label=Debian-Security"; Ok

By default, select: OK

That's it, automatic updates are now in place.
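For reference, the schedule written by dpkg-reconfigure lives in /etc/apt/apt.conf.d/20auto-upgrades; on a standard Debian install it typically contains:

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

A value of "1" means the package lists are refreshed and unattended-upgrade is run daily.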


Enjoy!



youtube-dl 2018-03-18T23:39:00+01:00 2018-03-18T23:39:00+01:00

Youtube-dl: fix "HTTP Error 429: Too Many Requests"

Fixing the "HTTP Error 429: Too Many Requests" error with youtube-dl.

ERROR: Unable to download webpage: HTTP Error 429: Too Many Requests (caused by HTTPError());
please report this issue on .
Make sure you are using the latest version; type youtube-dl -U to update.
Be sure to call youtube-dl with the --verbose flag and include its complete output.

Simply add the --force-ipv4 option to your command!

$ youtube-dl  -o "%(title)s.%(ext)s" --force-ipv4 


Then add an alias to simplify your life; add this to .bashrc:

$ alias ytdl='youtube-dl  -o "%(title)s.%(ext)s" --extract-audio --audio-format mp3 -k --force-ipv4 $1' 

This first line downloads and keeps both the video and the mp3.

$ alias ytdlV='youtube-dl  -o "%(title)s.%(ext)s" --force-ipv4 $1'

This line downloads and keeps only the video.

$ youtube-dl

 7Uexuyy_HL8: Downloading webpage
 7Uexuyy_HL8: Downloading video info webpage
 7Uexuyy_HL8: Extracting video information
[download] Destination: Stupeflip - The Antidote.f248.webm
[download] 100% of 1.50MiB in 00:00
[download] Destination: Stupeflip - The Antidote.f251.webm
[download] 100% of 3.24MiB in 00:00
[ffmpeg] Merging formats into "Stupeflip - The Antidote.webm"
Deleting original file Stupeflip - The Antidote.f248.webm (pass -k to keep)
Deleting original file Stupeflip - The Antidote.f251.webm (pass -k to keep)

tips youtube-dl debian

A special note for OVH: add --force-ipv4 or -4.

Update before use:

youtube-dl -U, --update


 alias ytdl='youtube-dl --force-ipv4 -i $1'
 alias ytdlvid='cd /home/media/Youtube  && youtube-dl -4  -o "%(title)s.%(ext)s" -i  $1'


  alias ytdlmp32="youtube-dl -4 --extract-audio --audio-format best --audio-quality 0 -i $1"
  alias ytdlmp3='cd /home/media/YoutubeMP3/ && youtube-dl  -o "%(title)s.%(ext)s"  --audio-format best  --force-ipv4  -x  -i $1'
 #youtube-dl -4 --extract-audio --audio-format mp3
  General Options:
    -h, --help                       Print this help text and exit
    --version                        Print program version and exit
    -U, --update                     Update this program to latest version. Make sure that you have sufficient permissions (run with sudo if needed)
    -i, --ignore-errors              Continue on download errors, for example to skip unavailable videos in a playlist
    --abort-on-error                 Abort downloading of further videos (in the playlist or the command line) if an error occurs
    --dump-user-agent                Display the current browser identification
    --list-extractors                List all supported extractors
    --extractor-descriptions         Output descriptions of all supported extractors
    --force-generic-extractor        Force extraction to use the generic extractor
    --default-search PREFIX          Use this prefix for unqualified URLs. For example "gvsearch2:" downloads two videos from google videos for youtube-dl "large apple". Use the value "auto" to let
                                     youtube-dl guess ("auto_warning" to emit a warning when guessing). "error" just throws an error. The default value "fixup_error" repairs broken URLs, but emits an
                                     error if this is not possible instead of searching.
    --ignore-config                  Do not read configuration files. When given in the global configuration file /etc/youtube-dl.conf: Do not read the user configuration in ~/.config/youtube-
                                     dl/config (%APPDATA%/youtube-dl/config.txt on Windows)
    --config-location PATH           Location of the configuration file; either the path to the config or its containing directory.
    --flat-playlist                  Do not extract the videos of a playlist, only list them.
    --mark-watched                   Mark videos watched (YouTube only)
    --no-mark-watched                Do not mark videos watched (YouTube only)
    --no-color                       Do not emit color codes in output

  Network Options:
    --proxy URL                      Use the specified HTTP/HTTPS/SOCKS proxy. To enable SOCKS proxy, specify a proper scheme. For example socks5:// Pass in an empty string (--proxy
                                     "") for direct connection
    --socket-timeout SECONDS         Time to wait before giving up, in seconds
    --source-address IP              Client-side IP address to bind to
    -4, --force-ipv4                 Make all connections via IPv4
    -6, --force-ipv6                 Make all connections via IPv6

  Geo Restriction:
    --geo-verification-proxy URL     Use this proxy to verify the IP address for some geo-restricted sites. The default proxy specified by --proxy (or none, if the option is not present) is used for
                                     the actual downloading.
    --geo-bypass                     Bypass geographic restriction via faking X-Forwarded-For HTTP header
    --no-geo-bypass                  Do not bypass geographic restriction via faking X-Forwarded-For HTTP header
    --geo-bypass-country CODE        Force bypass geographic restriction with explicitly provided two-letter ISO 3166-2 country code
    --geo-bypass-ip-block IP_BLOCK   Force bypass geographic restriction with explicitly provided IP block in CIDR notation

  Video Selection:
    --playlist-start NUMBER          Playlist video to start at (default is 1)
    --playlist-end NUMBER            Playlist video to end at (default is last)
    --playlist-items ITEM_SPEC       Playlist video items to download. Specify indices of the videos in the playlist separated by commas like: "--playlist-items 1,2,5,8" if you want to download videos
                                     indexed 1, 2, 5, 8 in the playlist. You can specify range: "--playlist-items 1-3,7,10-13", it will download the videos at index 1, 2, 3, 7, 10, 11, 12 and 13.
    --match-title REGEX              Download only matching titles (regex or caseless sub-string)
    --reject-title REGEX             Skip download for matching titles (regex or caseless sub-string)
    --max-downloads NUMBER           Abort after downloading NUMBER files
    --min-filesize SIZE              Do not download any videos smaller than SIZE (e.g. 50k or 44.6m)
    --max-filesize SIZE              Do not download any videos larger than SIZE (e.g. 50k or 44.6m)
    --date DATE                      Download only videos uploaded on this date
    --datebefore DATE                Download only videos uploaded on or before this date (i.e. inclusive)
    --dateafter DATE                 Download only videos uploaded on or after this date (i.e. inclusive)
    --min-views COUNT                Do not download any videos with fewer than COUNT views
    --max-views COUNT                Do not download any videos with more than COUNT views
    --match-filter FILTER            Generic video filter. Specify any key (see the "OUTPUT TEMPLATE" for a list of available keys) to match if the key is present, !key to check if the key is not
                                     present, key > NUMBER (like "comment_count > 12", also works with >=, <, <=, !=, =) to compare against a number, key = 'LITERAL' (like "uploader = 'Mike Smith'",
                                     also works with !=) to match against a string literal and & to require multiple matches. Values which are not known are excluded unless you put a question mark (?)
                                     after the operator. For example, to only match videos that have been liked more than 100 times and disliked less than 50 times (or the dislike functionality is not
                                     available at the given service), but who also have a description, use --match-filter "like_count > 100 & dislike_count <? 50 & description" .
    --no-playlist                    Download only the video, if the URL refers to a video and a playlist.
    --yes-playlist                   Download the playlist, if the URL refers to a video and a playlist.
    --age-limit YEARS                Download only videos suitable for the given age
    --download-archive FILE          Download only videos not listed in the archive file. Record the IDs of all downloaded videos in it.
    --include-ads                    Download advertisements as well (experimental)
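
Putting a few of these together: the sketch below (echoed rather than executed, with placeholder file name and URL) grabs items 1-3 and 7 of a playlist, keeps only 2020 uploads, and records IDs in an archive file so later runs skip what was already fetched:

```shell
# Dry-run: compose a filtered playlist download and print the command.
sel_cmd() {
  echo youtube-dl --playlist-items 1-3,7 \
    --dateafter 20200101 --datebefore 20201231 \
    --download-archive done.txt "$1"
}
sel_cmd "https://example.com/playlist?list=XYZ"
```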

  Download Options:
    -r, --limit-rate RATE            Maximum download rate in bytes per second (e.g. 50K or 4.2M)
    -R, --retries RETRIES            Number of retries (default is 10), or "infinite".
    --fragment-retries RETRIES       Number of retries for a fragment (default is 10), or "infinite" (DASH, hlsnative and ISM)
    --skip-unavailable-fragments     Skip unavailable fragments (DASH, hlsnative and ISM)
    --abort-on-unavailable-fragment  Abort downloading when some fragment is not available
    --keep-fragments                 Keep downloaded fragments on disk after downloading is finished; fragments are erased by default
    --buffer-size SIZE               Size of download buffer (e.g. 1024 or 16K) (default is 1024)
    --no-resize-buffer               Do not automatically adjust the buffer size. By default, the buffer size is automatically resized from an initial value of SIZE.
    --http-chunk-size SIZE           Size of a chunk for chunk-based HTTP downloading (e.g. 10485760 or 10M) (default is disabled). May be useful for bypassing bandwidth throttling imposed by a
                                     webserver (experimental)
    --playlist-reverse               Download playlist videos in reverse order
    --playlist-random                Download playlist videos in random order
    --xattr-set-filesize             Set file xattribute ytdl.filesize with expected file size
    --hls-prefer-native              Use the native HLS downloader instead of ffmpeg
    --hls-prefer-ffmpeg              Use ffmpeg instead of the native HLS downloader
    --hls-use-mpegts                 Use the mpegts container for HLS videos, allowing you to play the video while it downloads (some players may not be able to play it)
    --external-downloader COMMAND    Use the specified external downloader. Currently supports aria2c,avconv,axel,curl,ffmpeg,httpie,wget
    --external-downloader-args ARGS  Give these arguments to the external downloader
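
A typical combination, sketched below with a placeholder URL (echoed only): cap the rate at 2 MB/s, retry indefinitely, and hand the actual transfer to aria2c with 4 connections. aria2c would have to be installed for the real command to work:

```shell
# Dry-run: print the throttled, externally-downloaded invocation.
dl_cmd="youtube-dl -r 2M -R infinite --external-downloader aria2c \
--external-downloader-args '-x 4' https://example.com/watch?v=abc"
echo "$dl_cmd"
```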

  Filesystem Options:
    -a, --batch-file FILE            File containing URLs to download ('-' for stdin), one URL per line. Lines starting with '#', ';' or ']' are considered as comments and ignored.
    --id                             Use only video ID in file name
    -o, --output TEMPLATE            Output filename template, see the "OUTPUT TEMPLATE" for all the info
    --autonumber-start NUMBER        Specify the start value for %(autonumber)s (default is 1)
    --restrict-filenames             Restrict filenames to only ASCII characters, and avoid "&" and spaces in filenames
    -w, --no-overwrites              Do not overwrite files
    -c, --continue                   Force resume of partially downloaded files. By default, youtube-dl will resume downloads if possible.
    --no-continue                    Do not resume partially downloaded files (restart from beginning)
    --no-part                        Do not use .part files - write directly into output file
    --no-mtime                       Do not use the Last-modified header to set the file modification time
    --write-description              Write video description to a .description file
    --write-info-json                Write video metadata to a .info.json file
    --write-annotations              Write video annotations to a .annotations.xml file
    --load-info-json FILE            JSON file containing the video information (created with the "--write-info-json" option)
    --cookies FILE                   File to read cookies from and dump cookie jar in
    --cache-dir DIR                  Location in the filesystem where youtube-dl can store some downloaded information permanently. By default $XDG_CACHE_HOME/youtube-dl or ~/.cache/youtube-dl . At
                                     the moment, only YouTube player files (for videos with obfuscated signatures) are cached, but that may change.
    --no-cache-dir                   Disable filesystem caching
    --rm-cache-dir                   Delete all filesystem cache files
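
The output template is the workhorse here. A sketch (echoed, not run; `urls.txt` is a placeholder batch file) that sorts downloads per uploader and keeps the video id for uniqueness, while `-w` and `-c` preserve and resume existing files:

```shell
# Dry-run: show a batch download using an output template.
out_tpl='%(uploader)s/%(title)s-%(id)s.%(ext)s'
echo youtube-dl -w -c -a urls.txt -o "$out_tpl"
```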

  Thumbnail images:
    --write-thumbnail                Write thumbnail image to disk
    --write-all-thumbnails           Write all thumbnail image formats to disk
    --list-thumbnails                Simulate and list all available thumbnail formats

  Verbosity / Simulation Options:
    -q, --quiet                      Activate quiet mode
    --no-warnings                    Ignore warnings
    -s, --simulate                   Do not download the video and do not write anything to disk
    --skip-download                  Do not download the video
    -g, --get-url                    Simulate, quiet but print URL
    -e, --get-title                  Simulate, quiet but print title
    --get-id                         Simulate, quiet but print id
    --get-thumbnail                  Simulate, quiet but print thumbnail URL
    --get-description                Simulate, quiet but print video description
    --get-duration                   Simulate, quiet but print video length
    --get-filename                   Simulate, quiet but print output filename
    --get-format                     Simulate, quiet but print output format
    -j, --dump-json                  Simulate, quiet but print JSON information. See the "OUTPUT TEMPLATE" for a description of available keys.
    -J, --dump-single-json           Simulate, quiet but print JSON information for each command-line argument. If the URL refers to a playlist, dump the whole playlist information in a single line.
    --print-json                     Be quiet and print the video information as JSON (video is still being downloaded).
    --newline                        Output progress bar as new lines
    --no-progress                    Do not print progress bar
    --console-title                  Display progress in console titlebar
    -v, --verbose                    Print various debugging information
    --dump-pages                     Print downloaded pages encoded using base64 to debug problems (very verbose)
    --write-pages                    Write downloaded intermediary pages to files in the current directory to debug problems
    --print-traffic                  Display sent and read HTTP traffic
    -C, --call-home                  Contact the youtube-dl server for debugging
    --no-call-home                   Do NOT contact the youtube-dl server for debugging

    --encoding ENCODING              Force the specified encoding (experimental)
    --no-check-certificate           Suppress HTTPS certificate validation
    --prefer-insecure                Use an unencrypted connection to retrieve information about the video. (Currently supported only for YouTube)
    --user-agent UA                  Specify a custom user agent
    --referer URL                    Specify a custom referer, use if the video access is restricted to one domain
    --add-header FIELD:VALUE         Specify a custom HTTP header and its value, separated by a colon ':'. You can use this option multiple times
    --bidi-workaround                Work around terminals that lack bidirectional text support. Requires bidiv or fribidi executable in PATH
    --sleep-interval SECONDS         Number of seconds to sleep before each download when used alone or a lower bound of a range for randomized sleep before each download (minimum possible number of
                                     seconds to sleep) when used along with --max-sleep-interval.
    --max-sleep-interval SECONDS     Upper bound of a range for randomized sleep before each download (maximum possible number of seconds to sleep). Must only be used along with --min-sleep-interval.
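
The simulation switches are handy for scripting: `-j` prints the metadata as JSON without downloading anything, and a tool like jq (if installed) can pull out a single field. A sketch with a placeholder URL, echoed rather than executed:

```shell
# Dry-run: print a metadata-only pipeline instead of running it.
meta_cmd='youtube-dl -j https://example.com/watch?v=abc | jq -r .duration'
echo "$meta_cmd"
```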

  Video Format Options:
    -f, --format FORMAT              Video format code, see the "FORMAT SELECTION" for all the info
    --all-formats                    Download all available video formats
    --prefer-free-formats            Prefer free video formats unless a specific one is requested
    -F, --list-formats               List all available formats of requested videos
    --youtube-skip-dash-manifest     Do not download the DASH manifests and related data on YouTube videos
    --merge-output-format FORMAT     If a merge is required (e.g. bestvideo+bestaudio), output to given container format. One of mkv, mp4, ogg, webm, flv. Ignored if no merge is required
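
A common format recipe, sketched below with a placeholder URL (echoed only): take up-to-1080p video plus the best audio and merge them into mkv, falling back to the best single file when no merge is possible:

```shell
# Dry-run: print a format-selection invocation with an mkv merge target.
fmt='bestvideo[height<=1080]+bestaudio/best'
echo youtube-dl -f "$fmt" --merge-output-format mkv "https://example.com/watch?v=abc"
```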

  Subtitle Options:
    --write-sub                      Write subtitle file
    --write-auto-sub                 Write automatically generated subtitle file (YouTube only)
    --all-subs                       Download all the available subtitles of the video
    --list-subs                      List all available subtitles for the video
    --sub-format FORMAT              Subtitle format, accepts formats preference, for example: "srt" or "ass/srt/best"
    --sub-lang LANGS                 Languages of the subtitles to download (optional) separated by commas, use --list-subs for available language tags
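
For example, fetching French and English subtitles with a preference for srt and embedding them in the container (embedding needs ffmpeg and an mp4/webm/mkv target). Sketched with a placeholder URL and echoed rather than run:

```shell
# Dry-run: print a subtitle download-and-embed invocation.
sub_args='--write-sub --sub-lang fr,en --sub-format srt --embed-subs'
echo youtube-dl $sub_args "https://example.com/watch?v=abc"
```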

  Authentication Options:
    -u, --username USERNAME          Login with this account ID
    -p, --password PASSWORD          Account password. If this option is left out, youtube-dl will ask interactively.
    -2, --twofactor TWOFACTOR        Two-factor authentication code
    -n, --netrc                      Use .netrc authentication data
    --video-password PASSWORD        Video password (vimeo, smotri, youku)

  Adobe Pass Options:
    --ap-mso MSO                     Adobe Pass multiple-system operator (TV provider) identifier, use --ap-list-mso for a list of available MSOs
    --ap-username USERNAME           Multiple-system operator account login
    --ap-password PASSWORD           Multiple-system operator account password. If this option is left out, youtube-dl will ask interactively.
    --ap-list-mso                    List all supported multiple-system operators

  Post-processing Options:
    -x, --extract-audio              Convert video files to audio-only files (requires ffmpeg or avconv and ffprobe or avprobe)
    --audio-format FORMAT            Specify audio format: "best", "aac", "flac", "mp3", "m4a", "opus", "vorbis", or "wav"; "best" by default; No effect without -x
    --audio-quality QUALITY          Specify ffmpeg/avconv audio quality, insert a value between 0 (better) and 9 (worse) for VBR or a specific bitrate like 128K (default 5)
    --recode-video FORMAT            Encode the video to another format if necessary (currently supported: mp4|flv|ogg|webm|mkv|avi)
    --postprocessor-args ARGS        Give these arguments to the postprocessor
    -k, --keep-video                 Keep the video file on disk after the post-processing; the video is erased by default
    --no-post-overwrites             Do not overwrite post-processed files; the post-processed files are overwritten by default
    --embed-subs                     Embed subtitles in the video (only for mp4, webm and mkv videos)
    --embed-thumbnail                Embed thumbnail in the audio as cover art
    --add-metadata                   Write metadata to the video file
    --metadata-from-title FORMAT     Parse additional metadata like song title / artist from the video title. The format syntax is the same as --output. Regular expression with named capture groups
                                     may also be used. The parsed parameters replace existing values. Example: --metadata-from-title "%(artist)s - %(title)s" matches a title like "Coldplay -
                                     Paradise". Example (regex): --metadata-from-title "(?P<artist>.+?) - (?P<title>.+)"
    --xattrs                         Write metadata to the video file's xattrs (using dublin core and xdg standards)
    --fixup POLICY                   Automatically correct known faults of the file. One of never (do nothing), warn (only emit a warning), detect_or_warn (the default; fix file if we can, warn otherwise)
    --prefer-avconv                  Prefer avconv over ffmpeg for running the postprocessors
    --prefer-ffmpeg                  Prefer ffmpeg over avconv for running the postprocessors (default)
    --ffmpeg-location PATH           Location of the ffmpeg/avconv binary; either the path to the binary or its containing directory.
    --exec CMD                       Execute a command on the file after downloading, similar to find's -exec syntax. Example: --exec 'adb push {} /sdcard/Music/ && rm {}'
    --convert-subs FORMAT            Convert the subtitles to other format (currently supported: srt|ass|vtt|lrc)
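
The classic use case: rip the audio track to mp3 at roughly 192 kbit/s and tag it from the video metadata. ffmpeg (or avconv) would need to be on the PATH; the URL is a placeholder and the command is echoed, not executed:

```shell
# Dry-run: print an audio-extraction invocation.
audio_cmd='youtube-dl -x --audio-format mp3 --audio-quality 192K --add-metadata https://example.com/watch?v=abc'
echo "$audio_cmd"
```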

Happy downloading!


Secu tools 2018-03-17T03:22:00+01:00 2018-03-17T03:22:00+01:00



Simple IOC Scanner
Scanner for Simple Indicators of Compromise


PHP scanner written in Python for identifying PHP backdoors and malicious code. It mainly reuses the tools mentioned below. To use it, you need to install the yara library for Python from source.


Does its very best to detect obfuscated/dodgy code, as well as files that use PHP functions often found in malware/webshells. Detection is performed by crawling the filesystem and testing files against a set of YARA rules.
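
The same crawl-and-match idea can be reproduced with the standalone yara CLI (if installed): recursively test a web root against a rule file. Both paths below are placeholders, and the command is echoed rather than run:

```shell
# Dry-run: print a recursive YARA scan of a web root.
scan_cmd='yara -r php_webshells.yar /var/www'
echo "$scan_cmd"
```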


Scans the current working directory and displays results with a score greater than the given value. Released under the MIT license.

Yasca (GitHub)

An open-source program that checks for security vulnerabilities, code quality, performance, and conformance issues.


Web Security Scanner

Acunetix WVS automatically checks your web applications for SQL Injection, XSS & other web vulnerabilities.


A static source code analyser for vulnerabilities in PHP scripts.


An open-source web server scanner that performs comprehensive tests against web servers for multiple items, including potentially dangerous files/programs.

ClamAV extension for PHP (php-clamav) - a fork of the php-clamavlib project that lets you incorporate virus-scanning features in your PHP scripts.

Older projects: securityscanner, phpsecaudit.


Check also the following security websites:

PHP Security Consortium

Founded in January 2005, the PHP Security Consortium (PHPSC) is an international group of PHP experts dedicated to promoting secure programming practices within the PHP community. Members of the PHPSC seek to educate PHP developers about security through a variety of resources, including documentation, tools, and standards.