How to run Proxmox with only a single public IP address

IPv4 addresses are becoming rarer by the day. In some cases, it can be pretty hard to get multiple IPv4 addresses for your Proxmox server.

Thankfully, Proxmox is basically Debian Linux with the Proxmox layer on top, which gives us quite a lot of flexibility.

This tutorial will help you create a fully functional Proxmox server running multiple containers & virtual machines, using only a single IPv4 address.

These are the main steps :

  1. Create port forwarding rules
  2. Make sure they are applied automatically every time the server is restarted
  3. Set up a reverse-proxy server to forward HTTP/S requests to the correct container / virtual machine
  4. Set up HTTPS

For any CT (container) / VM (virtual machine) that runs a webserver, point 3 is important – because there's only one public IP address, there's only one port 80 and one port 443 facing the Internet.

By forwarding ports 80 and 443 to a reverse proxy in a CT, we'll be able to route incoming visitors, by hostname / domain name, to the correct CT/VM.

1. CREATE PORT FORWARDING RULES

Modify the following to match your host's interface name & your CT/VMs' internal IP addresses, then copy-paste into a terminal :

###### All HTTP/S traffic are forwarded to reverse proxy
iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 80 -j DNAT --to 10.10.50.1:80

iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 443 -j DNAT --to 10.10.50.1:443

###### SSH ports to each existing CT/VM
iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 22101 -j DNAT --to 10.10.50.1:22

iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 22102 -j DNAT --to 10.10.50.2:22

iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 22103 -j DNAT --to 10.10.50.3:22

iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 22104 -j DNAT --to 10.10.50.4:22

Then we save it :

iptables-save > /etc/iptables.rules
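With these rules in place, each CT/VM can now be reached over SSH through the single public IP, each on its own high port – for example (hostname here is just a placeholder) :

ssh -p 22102 root@proxmox.example.com   # lands on the CT at 10.10.50.2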

2. EXECUTE IPTABLES AT SERVER RESTART

Edit the /etc/network/interfaces file, find the network interface that's facing the Internet (in my case, vmbr0), then add the pre-up line to its stanza as follows :

auto vmbr0
pre-up iptables-restore < /etc/iptables.rules
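For reference, a typical vmbr0 stanza with the pre-up line in place might look roughly like this (the addresses and bridge settings below are just placeholders – keep whatever your installation already has) :

auto vmbr0
iface vmbr0 inet static
        address 203.0.113.10/24
        gateway 203.0.113.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        pre-up iptables-restore < /etc/iptables.rules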

3. SETUP REVERSE-PROXY

In a CT, install Nginx. Then, for each domain, create a configuration file like the one below – for example /etc/nginx/sites-available/www.my_website.com :

server {
    listen 80;
    server_name www.my_website.com;

    location / {
        proxy_pass http://10.10.50.2:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

To activate it (assuming you're using Ubuntu), link it into /etc/nginx/sites-enabled/ , then restart Nginx :

ln -s /etc/nginx/sites-available/www.my_website.com /etc/nginx/sites-enabled/www.my_website.com

/etc/init.d/nginx restart
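On systemd-based installs you can also check the configuration first and reload instead of restarting :

nginx -t && systemctl reload nginx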

Note: as mentioned before, all HTTP/S traffic will have to go through this reverse proxy, so you may wish to tune this Nginx installation accordingly.

4. SETUP HTTPS

It’s very easy with Let’s Encrypt once you’ve done point 3 above. Do the following on the reverse-proxy CT :

sudo apt-get update ; sudo apt-get install -y certbot python3-certbot-nginx

sudo certbot --nginx

sudo /etc/init.d/nginx restart
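certbot sets up automatic renewal by itself (via a cron job / systemd timer); you can verify that renewal will work with :

sudo certbot renew --dry-run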

Reference:

https://gist.githubusercontent.com/basoro/b522864678a70b723de970c4272547c8/raw/a985657453f72683040fbe38b1db6b1989618116/proxmox-proxy

Installing HTTrack on Ubuntu from Source

Today I needed to have the latest version of HTTrack installed, to make a (static) mirror of a website that I managed.

After a few attempts, this is how you compile & install HTTrack from source on Ubuntu :

wget "http://download.httrack.com/cserv.php3?File=httrack.tar.gz"

mv cserv.php3\?File\=httrack.tar.gz  httrack.tar.gz

tar xzvf httrack.tar.gz

cd httrack-3.49.2/

### the following is the key to a successful install
apt-get install zlib1g-dev libssl-dev build-essential

./configure && make && make install
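Once installed, mirroring a site boils down to something like this (the URL and output directory are just placeholders) :

httrack "https://www.example.com/" -O /home/user/mirror/example.com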

Cutting a Table out of a mysqldump output file

I was restoring the backup of a MySQL 5.x server into a MySQL 8.x server – and found out that it corrupted the MySQL 8.x `mysql` system database, which stores the usernames and passwords.

So I had to cut the `mysql` database out of the backup before trying to restore it again.

It turned out to be pretty easy – it just takes some time, since it's a pretty big backup :

# search for the beginning of the 'mysql' database
grep -n 'Current Database: `mysql`' backup.mysql

# 155604:-- Current Database: `mysql`

# search for the end of the 'mysql' database (i.e. the start of the next one)
tail -n +155604 backup.mysql | grep -n "Current Database"

# 1:  -- Current Database: `mysql`
# 916:-- Current Database: `phpmyadmin`

# cut that database out of the backup
head -155603 backup.mysql                > new.mysql
tail -n +$(( 155603+916 )) backup.mysql >> new.mysql

# voila !
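The same cut can also be done in a single pass with sed, using the numbers found above (155603 = last line to keep before the `mysql` database, 155603+916 = 156519 = first line of the next database) :

sed -n '1,155603p; 156519,$p' backup.mysql > new.mysql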

Crontab runs on different timezone : here’s the fix

A few days ago I got reports that a server was running its cron jobs at strange times. Logged in, and indeed it was. A huge backup was running during peak hours. Saying that it disrupted people's work is an understatement.

To my surprise, the explanation for this issue could not be found straight away. It took some googling to find the cause, and even more time to find the correct solution.

So to cut to the chase – /etc/localtime was a link to /usr/share/zoneinfo/America/New_York

Changed it to /usr/share/zoneinfo/Asia/Jakarta – and voila, now the cronjobs are running at the correct times.
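For the record, on a systemd-based distro the same fix boils down to this (the zone is of course just my case) :

timedatectl set-timezone Asia/Jakarta

# or do it by hand :
ln -sf /usr/share/zoneinfo/Asia/Jakarta /etc/localtime

# then restart cron so it picks up the new timezone
systemctl restart cron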

Hope it helps

XCTB – X Compression Tool Benchmarker

I deal with a lot of big files at work, while storage capacity is certainly not infinite. So it's in my interest to keep file sizes as low as possible.

One way to achieve that is by using compression. Especially when dealing with log files or database archives, you can save a ton of space with the right compression tool.

But space saving is not the only consideration.

You also need to weigh other factors, such as :

  • File type : different tools compress different types of files differently
  • CPU multi-core capabilities
  • Compression speed
  • Compression size
  • Decompression time

But there are so many compression tools available on Unix / Linux that choosing which one to use can be really confusing, even for a seasoned expert.

So I created X Compression Tool Benchmarker to help with this.

Features :

  • Test any kind of file : just put the file's name as the parameter when calling the script (see the example after this list), and it will be tested against all the specified compression tools.
  • Add more compression tools easily : just edit the compressor_list & ext_file variables, and that's it
  • Fire and forget : just run the script and leave it. It will run without needing any intervention
  • CSV output : ready to be opened with LibreOffice / Excel, and made into graphs in seconds.
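A run looks something like this (the script name here is just an illustration – use whatever you named it) :

./xctb.sh my-database-dump.sql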

Here’s a sample result for a Database archive file (type MySQL dump) :

The bar chart on top of this article is based from this result.

Currently, this script benchmarks the following compression tools automatically : pigz – gzip – bzip2 – pbzip2 – lrzip – rzip – zstd – pixz – plzip – xz

The results, for different file types, may surprise you 🙂

For example, I was surprised to see rzip beat lrzip – because lrzip is supposed to be an enhancement of rzip.

Then I was even more surprised to find out that :

  • I was testing Debian Buster’s version of rzip, which turned out to be pretty old – it does not even have multi-thread/core capability
  • But when I tested the latest version of rzip, which can use all the 16 cores in my server – it turned out to be slower than the old rzip from Debian Buster !
  • No, disk speed is not an issue – I made sure all the benchmarks were run from an NVMe SSD

So I was grinning at how Debian Buster packaged a very old version of rzip instead of the new one – it turned out the joke's on me : the old rzip performs better than the new one, even without multi-core capability.

It was also amazing to see how really, REALLY fast zstd is, while still giving a decent compression size. When you absolutely need compression speed, this not-so-well-known compression tool turned out to be the clear winner.

And so on, etc

Yes, indeed I had fun 🙂

I hope you will too. Enjoy !


UPDATE : My friend Eko Juniarto published his results here and has permitted me to publish them here as well – thanks. Very interesting, indeed.

BCA – list of correspondent banks in the United States

One day, after finishing a seminar in Hawaii, I was asked for this information (BCA's correspondent banks in the United States) so my honorarium could be transferred.

It turned out this information was nowhere to be found.

I asked via BCA's call center at 1500888 – they didn't know either.

Finally, when my wife happened to have some business at a BCA branch, she asked about it as well. The answer: I had to come and ask in person.

My wife was furious 😀 hahahaha

What kind of logic requires me to show up at BCA in person just to ask for "BCA correspondent bank information" 😀 ha ha ha

And if the reason is that you have to be a BCA customer – my wife is a BCA customer too, she also has an account there.

In the end BCA's customer service gave up and handed over the information, hahaha. Unbelievable.

I'm attaching that information here, so hopefully anyone who needs it won't have to go through the same absurdity & waste their time as well.

BANK NAME : Bank of New York
SWIFT CODE : IRVTUS3N

BANK NAME : Bank of America
SWIFT CODE : BOFAUS6S

BANK NAME : Wells Fargo Bank
SWIFT CODE : PNBPUS3NNYC

BANK NAME : JP Morgan Chase Bank
SWIFT CODE : CHASUS33

BANK NAME : Citibank
SWIFT CODE : CITIUS33

BANK NAME : Standard Chartered Bank
SWIFT CODE : SCBLUS33

Installing w3af

w3af (Web Application Attack and Audit Framework) is a tool you can use to check the security of your application / website.

Installation & usage is very easy – just follow this guide :


sudo apt-get update ; sudo apt-get -y install python-pip git

git clone https://github.com/andresriancho/w3af.git
cd w3af/
./w3af_console
# install all the packages it asks for, then run the
# dependency-install script that w3af generates :

sudo /tmp/w3af_dependency_install.sh

Now w3af & all the software packages it needs are installed.

Next, create a file named MyScript.w3af with the following content :

(NOTE : don't use the "redos" plugin for now – the last time I used it, it ran for 2 days and ate up all the disk space on my server. Be careful)


# -----------------------------------------------------------------------------------------------------------
# W3AF AUDIT SCRIPT FOR WEB APPLICATION
# -----------------------------------------------------------------------------------------------------------
#Configure HTTP settings and scanner global behaviors
http-settings
set timeout 20
set max_requests_per_second 100
back
misc-settings
set max_discovery_time 20
set fuzz_cookies True
set fuzz_form_files True
set fuzz_url_parts True
set fuzz_url_filenames True
back
plugins
#Configure entry point (CRAWLING) scanner
crawl web_spider
crawl config web_spider
set only_forward False
set ignore_regex (?i)(logout|disconnect|signout|exit)+
back
#Configure vulnerability scanners
##Specify list of AUDIT plugins type to use
audit blind_sqli, buffer_overflow, cors_origin, csrf, eval, file_upload, ldapi, lfi, os_commanding, phishing_vector, response_splitting, sqli, xpath, xss, xst
##Customize behavior of each audit plugin when needed
audit config file_upload
set extensions jsp,php,php2,php3,php4,php5,asp,aspx,pl,cfm,rb,py,sh,ksh,csh,bat,ps,exe
back
##Specify list of GREP plugins type to use (grep plugin is a type of plugin that can find also vulnerabilities or informations disclosure)
grep analyze_cookies, click_jacking, code_disclosure, cross_domain_js, csp, directory_indexing, dom_xss, error_500, error_pages, html_comments, objects, path_disclosure, private_ip, strange_headers, strange_http_codes, strange_parameters, strange_reason, url_session, xss_protection_header
##Specify list of INFRASTRUCTURE plugins type to use (infrastructure plugin is a type of plugin that can find informations disclosure)
infrastructure server_header, server_status, domain_dot, dot_net_errors
#Configure target authentication
#Configure reporting in order to generate an HTML report
output console, html_file
output config html_file
set output_file /tmp/W3afReport.html
set verbose False
back
output config console
set verbose False
back
back
#Set target informations, do a cleanup and run the scan
target
###### REPLACE WITH THE SITE YOU WANT TO TEST ###############
set target https://google.com
set target_os unix
set target_framework php
back
cleanup
start

Save the file, then run the following command :


./w3af_console -s MyScript.w3af

Now just wait for it to finish; afterwards the report can be found at /tmp/W3afReport.html

Enjoy !

Setup Varnish on Port 80

Sometimes you need to quickly set up Varnish, usually in an emergency (like your website getting featured on Reddit's frontpage 😀 ), to absorb most of the hits reaching your website.

But the webserver is already using port 80.
Now what ?

Pretty easy actually :

  1. Set up Varnish on another port, say, 6081
  2. Run an iptables command to redirect incoming traffic from port 80 to port 6081
  3. Make sure Varnish uses 127.0.0.1:80 as the backend

Presto – now all external traffic hits Varnish first, which will process it at lightning speed. (The REDIRECT rule only matches traffic arriving on the external interface, so Varnish's own connections to 127.0.0.1:80 still reach the webserver directly – no loop.)

Alright, so here’s the gory detail, also available on Pastebin.com : https://pastebin.com/2UBD7s05

Enjoy !

========

apt-get update ; apt-get -y install varnish

# Varnish should already be configured to listen on port 6081
# if in doubt, check /etc/default/varnish,
# and look for the following line :
# DAEMON_OPTS="-a :6081

# edit varnish config
vi /etc/varnish/default.vcl

# make sure the .port line is set to 80, like this :
# .port = "80";
# then save & exit
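# for reference, the backend stanza in default.vcl (assuming the
# webserver stays on localhost port 80) looks roughly like this :
# backend default {
#     .host = "127.0.0.1";
#     .port = "80";
# }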

# enable Apache's expires & headers module
a2enmod expires
a2enmod headers

# setup caching for static files
# via .htaccess file
echo "Header unset ETag" >> /var/www/.htaccess
echo "FileETag None" >> /var/www/.htaccess
echo "<ifmodule mod_expires.c>" >> /var/www/.htaccess
echo "<filesmatch \"(?i)^.*\\.(ico|flv|jpg|jpeg|png|gif|js|css)$\">" >> /var/www/.htaccess
echo "ExpiresActive On" >> /var/www/.htaccess
echo "ExpiresDefault \"access plus 2 minute\"" >> /var/www/.htaccess
echo "</filesmatch>" >> /var/www/.htaccess
echo "</ifmodule>" >> /var/www/.htaccess

# enable caching in php.ini
vi /etc/php/7.0/apache2/php.ini

# make sure session.cache_limiter = public
# save & exit

# restart Apache
/etc/init.d/apache2 restart

###### now let's start forwarding traffic to Varnish ######

# enable port forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward
vi /etc/sysctl.conf

# add this line at the end of the file :
# net.ipv4.ip_forward = 1

# now here's the command that will actually forward the traffic from port 80 to Varnish
# change eth0 to your computer's network interface name
iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 6081

# make sure this iptables setting will become permanent
apt-get -y install iptables-persistent
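# (iptables-persistent offers to save the current rules during install;
# if you change the rules later, re-save them with : netfilter-persistent save)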

WordPress Auto-Backup via SSH

This script will enable you to back up your WordPress websites automatically. Just put it in a crontab / automatic scheduling tool somewhere – see the example below.
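For example, a nightly crontab entry could look like this (the script path is just an assumption – adjust it to wherever you saved the script) :

30 2 * * * /home/MyUser/wordpress-backup.sh >> /var/log/wordpress-backup.log 2>&1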

Also available on Pastebin : https://pastebin.com/nZ2fiL8j

Enjoy.

=====
#!/bin/bash

### THIS SCRIPT ASSUMES THE FOLLOWING
# 1/ You can do SSH password-less login to the server
# How : https://easyengine.io/tutorials/linux/passwordless-authentication-ssh/
# 2/ You have created a correct ~/.my.cnf file
# How : https://easyengine.io/tutorials/mysql/mycnf-preference/

wordpress_server=MyUser@MyServer.com
wordpress_location=/home/MyUser/MyWebsite
backup_location=/MyDisk/MyBackup

mysql_server=mysql.MyWebsite.com
mysql_database=MyDatabase_db

# ====== START BACKUP ============

today=`date +%A`

# backup database
ssh $wordpress_server "mysqldump -h $mysql_server $mysql_database > $wordpress_location/db-$today.mysql"
ssh $wordpress_server "gzip $wordpress_location/db-$today.mysql"

# download everything
rsync -avuz --delete $wordpress_server:$wordpress_location/* $backup_location/

# delete database backup
# so no one can download it via the website
ssh $wordpress_server "rm $wordpress_location/db-$today.mysql.gz"

# done !