How to play Roblox on Linux

I used to play Roblox with my kids from my Linux (Ubuntu) laptop – by setting up a Windows virtual machine in VirtualBox and playing there. Of course it was slow, but at least I COULD play.

However, one day Roblox decided to block playing from a virtual machine. And there went my little bit of happiness with my children.

Then I found this article about using Steam Link to get remote access to a Windows desktop, and it gave me an idea – could I use this trick to play Roblox from my laptop again?

So I set up Google Chrome to be streamable from my gaming computer on the house's second floor – and voila, it showed up in my Steam account!

By launching that Google Chrome entry from the Steam app on Linux, I can browse to Roblox.com and play all the games there.

Mission accomplished !


Use Steam to Stream Your Desktop Instead of Your Games : https://lifehacker.com/use-steam-to-stream-your-desktop-instead-of-your-games-1818722875

Setup OpenVPN Server on Proxmox LXC

I needed to do this, but all the tutorials I could find were either incomplete or already outdated, such as this one.

After hacking around for a while, here's how to correctly set up an OpenVPN server in a container on Proxmox:

IN HOST

# create special device "tun" for OpenVPN
mkdir -p /devcontainer/net
mknod /devcontainer/net/tun c 10 200
chown 100000:100000 /devcontainer/net/tun

# enable your container to use that tun device
# change 124 into your container's number : pct list
echo "lxc.mount.entry: /devcontainer/net dev/net none bind,create=dir" >> /etc/pve/lxc/124.conf

# forward OpenVPN traffic to your container's IP address
# change 10.10.60.6 to your container's IP address
iptables -t nat -A PREROUTING -i vmbr0 -p tcp -m tcp --dport 1194 -j DNAT --to-destination 10.10.60.6:1194

iptables -t nat -A PREROUTING -i vmbr0 -p udp -m udp --dport 1194 -j DNAT --to-destination 10.10.60.6:1194

iptables -t nat -A PREROUTING -i vmbr1 -p tcp -m tcp --dport 53 -j DNAT --to-destination 10.10.60.6:53

# save iptables's rule
iptables-save > /etc/iptables.rules

IN CONTAINER

# execute the automated OpenVPN installation script 
mkdir /root/scripts
cd /root/scripts

wget git.io/vpn --no-check-certificate -O openvpn-install.sh ; chmod +x openvpn-install.sh ; ./openvpn-install.sh
 
# if you'd like to change the default 10.8.0.xxx IP address, do this :
# vi openvpn-install.sh
# :%s/10.8.0/10.88.0/g

# setup NAT, so the OpenVPN clients can reach the internet
# while connected to this OpenVPN server
# (adjust 10.88.0.0/24 to match the subnet you chose above)
iptables -I POSTROUTING -t nat -s 10.88.0.0/24 -j MASQUERADE

# save iptables's rule
iptables-save > /etc/iptables.rules
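
# Note: the container won't reload these rules on its own after a restart.
# One option (just a sketch, assuming a Debian/Ubuntu container) is the
# iptables-persistent package, which restores its saved rules at boot:
apt-get install -y iptables-persistent
netfilter-persistent save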

Running the /root/scripts/openvpn-install.sh script will produce a client configuration file with the .ovpn extension.

Download it to your computer / client, install an OpenVPN client, and use that .ovpn file as its configuration.
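
On a Linux client, for example, you can connect straight from the command line (a quick sketch, assuming the downloaded file is named client.ovpn):

sudo openvpn --config client.ovpn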

Enjoy !

OBS (Open Broadcasting Software) As Video source (for Zoom, Google Meet, Skype, etc) in Ubuntu 20.04

I used to use my Android smartphone as a webcam for Zoom / Skype / Google Meet / etc., because my laptop's webcam is so bad. This is possible thanks to the Droidcam app.

But sometimes there were problems, like wifi interference making the video slow down or freeze for a while. Long conferences / meetings also got my phone pretty hot, especially when I had to charge it at the same time. And of course I can't use my phone while it's being used as a webcam.

So I bought a Logitech C920 Pro webcam and started using it instead. In Linux it's recognized and can be used straight away,
but you may need to tweak its image quality a bit with guvcview before using it for work.

The picture quality is not as good as my smartphone's, because the phone heavily processes the images – they come out in HDR quality, in real time. But as a daily work webcam, this Logitech is good enough.

Then I needed to start using a green screen with this webcam too. There was one problem – my green screen is not wide enough to cover the webcam's wide angle.

With Droidcam this was not a problem: there's a "Zoom" feature, so I just zoomed in until the green screen filled the view.
But since Logitech does not provide any software for this webcam on Linux, I use OBS instead.

Using OBS, I can set up the green screen there, so I don't need Zoom's green screen / Virtual Background feature. It also means all other software (Google Meet, Skype, etc.) will automatically get the already green-screened video from OBS.

To zoom in within OBS, I just enlarge the webcam's image box until the green screen fills the view.
To activate the green screen, I use the Chroma Key filter.

[Screenshot: Tools – V4L2 Video Output enables OBS as a video source for other software. Make sure to tick the "Auto Start" option. (No green screen yet when I took this screenshot.)]

To make OBS become a video source, we’ll need to install obs-v4l2sink : https://github.com/CatxFish/obs-v4l2sink

It turned out there are a few problems installing it on Ubuntu 20.04; here are the workarounds:

# Another possible solution is using the Snap version of OBS,
# which already includes the v4l2loopback kernel module
# & the obs-v4l2sink plugin:
# sudo snap install obs-studio
# sudo modprobe v4l2loopback video_nr=10 card_label="OBS Video Source" exclusive_caps=1

# If for any reason you can't use this Snap-based solution,
# then continue:


# download needed software for compilation
sudo apt-get update ; sudo apt-get install -y obs-studio git cmake build-essential libobs-dev ffmpeg qtbase5-dev

cd /tmp ; mkdir myobscode ; cd myobscode

# get OBS' source code
git clone --recursive https://github.com/obsproject/obs-studio.git

# get plugin's source code
git clone https://github.com/CatxFish/obs-v4l2sink

# compile the OBS plugin
cd obs-v4l2sink
mkdir build && cd build
cmake -DLIBOBS_INCLUDE_DIR="../../obs-studio/libobs" -DCMAKE_INSTALL_PREFIX=/usr ..

make -j4
sudo make install
sudo cp v4l2sink.so /usr/lib/obs-plugins/
sudo cp /usr/lib/obs-plugins/v4l2sink.so /usr/lib/x86_64-linux-gnu/obs-plugins/

# It turned out we need to compile and install v4l2loopback ourselves,
# because Ubuntu's packaged version is too old
# Thanks to user jplandrain :
# https://github.com/CatxFish/obs-v4l2sink/issues/54#issuecomment-722966599
cd ../..   # back to /tmp/myobscode
sudo apt-get remove v4l2loopback-dkms
git clone --branch v0.12.5 https://github.com/umlaeute/v4l2loopback.git
cd v4l2loopback
make && sudo make install

# make v4l2loopback load automatically at boot
# (note: plain ">>" redirection doesn't work with sudo, so use tee -a;
#  also keep these options consistent with the modprobe command below)
echo "v4l2loopback" | sudo tee -a /etc/modules-load.d/modules.conf
echo 'options v4l2loopback video_nr=10 card_label="OBS Video Source" exclusive_caps=1' | sudo tee -a /etc/modprobe.d/v4l2loopback.conf

# load the loopback module into kernel now
sudo modprobe v4l2loopback video_nr=10 card_label="OBS Video Source" exclusive_caps=1
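
# (optional) check that the virtual device appeared - this assumes the
# v4l-utils package is installed; otherwise "ls /dev/video*" also works
v4l2-ctl --list-devices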

# start OBS - now there should be a new menu : 
#     Tools - V4L2 Video Output
# also you'll need to tick "Autostart" option after choosing that menu
obs &

How to run Proxmox with only a single public IP address

IPv4 addresses are becoming scarcer by the day. In some cases it can be pretty hard to get more than one IPv4 address for your Proxmox server.

Thankfully, Proxmox is basically Debian Linux with the Proxmox layer on top, which gives us quite a lot of flexibility.

This tutorial will help you create a fully functional Proxmox server running multiple containers & virtual machines, using only a single IPv4 address.

These are the main steps :

  1. Create port forwarding rules
  2. Make sure they're applied automatically every time the server is restarted
  3. Set up a reverse-proxy server to forward HTTP/S requests to the correct container / virtual machine
  4. Set up HTTPS

For CTs (containers) / VMs (virtual machines) that run a webserver, point 3 is important – with only one public IP address, there is only one port 80 and one port 443 facing the Internet.

By forwarding ports 80 and 443 to a reverse proxy in a CT, we can then route incoming visitors, by hostname / domain name, to the correct CT/VM.

1. CREATE PORT FORWARDING RULES

Modify the following to match your host's interface name & your CT/VMs' internal IP addresses, then copy-paste it into a terminal:

###### All HTTP/S traffic is forwarded to the reverse proxy
iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 80 -j DNAT --to 10.10.50.1:80

iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 443 -j DNAT --to 10.10.50.1:443

###### SSH ports to each existing CT/VM
iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 22101 -j DNAT --to 10.10.50.1:22

iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 22102 -j DNAT --to 10.10.50.2:22

iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 22103 -j DNAT --to 10.10.50.3:22

iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 22104 -j DNAT --to 10.10.50.4:22

Then we save it :

iptables-save > /etc/iptables.rules
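
With these rules in place, each CT/VM is reachable over SSH through the public address on its mapped port – for example (using a placeholder public IP):

ssh -p 22102 root@203.0.113.10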

2. EXECUTE IPTABLES AT SERVER RESTART

Edit the /etc/network/interfaces file, find the network interface that's facing the Internet (in my case, vmbr0), and add a pre-up line inside its iface stanza, like this:

auto vmbr0
iface vmbr0 inet static
    # ... existing address / bridge settings stay as they are ...
    pre-up iptables-restore < /etc/iptables.rules

3. SETUP REVERSE-PROXY

In a CT, install Nginx. Then for each domain, create a configuration file like this, for example: /etc/nginx/sites-available/www.my_website.com :

server {
    listen 80;
    server_name www.my_website.com;

    location / {
        proxy_pass http://10.10.50.2:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

To activate it (assuming you're using Ubuntu), link it into /etc/nginx/sites-enabled/ , then restart Nginx:

ln -s /etc/nginx/sites-available/www.my_website.com /etc/nginx/sites-enabled/www.my_website.com

/etc/init.d/nginx restart
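
Tip: before restarting, you can check the configuration for syntax errors first:

nginx -t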

Note: as noted before, all HTTP/S traffic will have to go through this reverse proxy. You may wish to tune this Nginx installation accordingly.

4. SETUP HTTPS

It’s very easy with Let’s Encrypt once you’ve done point 3 above. Do the following on the reverse-proxy CT :

sudo apt-get update ; sudo apt-get install -y certbot python3-certbot-nginx

sudo certbot --nginx

sudo /etc/init.d/nginx restart
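
The certbot package usually sets up automatic renewal by itself (via a systemd timer or cron job); you can verify that renewal will work with a dry run:

sudo certbot renew --dry-run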

Reference:

https://gist.githubusercontent.com/basoro/b522864678a70b723de970c4272547c8/raw/a985657453f72683040fbe38b1db6b1989618116/proxmox-proxy

Installing HTTrack on Ubuntu from Source

Today I needed the latest version of HTTrack to make a (static) mirror of a website that I manage.

After a few attempts, this is how you compile & install HTTrack from source on Ubuntu :

wget "http://download.httrack.com/cserv.php3?File=httrack.tar.gz"

mv cserv.php3\?File\=httrack.tar.gz  httrack.tar.gz

tar xzvf httrack.tar.gz

cd httrack-3.49.2/

### the following is the key to a successful install
apt-get install zlib1g-dev libssl-dev build-essential

./configure && make && make install
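
Once installed, a minimal mirroring run looks something like this (just a sketch – the URL and output directory are placeholders):

httrack "https://www.example.com" -O /home/user/mirror/example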

Cutting a Table out of a mysqldump output file

I was restoring a backup of a MySQL 5.x server into a MySQL 8.x server – and found that it corrupted MySQL 8.x's `mysql` system database,

which stores the usernames and passwords.

So I had to cut the `mysql` database out of the backup before trying to restore it again.

Turns out it's pretty easy; it just takes some time since it's a pretty big backup:

# search for the beginning of the 'mysql' database section
grep -n 'Current Database: `mysql`' backup.mysql

# 155604:-- Current Database: `mysql`

# search for the end of the 'mysql' section (line numbers relative to 155604)
tail -n +155604 backup.mysql | grep -n "Current Database"

# 1:  -- Current Database: `mysql`
# 916:-- Current Database: `phpmyadmin`

# cut that section out: keep everything before line 155604,
# then everything from the next 'Current Database' marker onwards
head -n 155603 backup.mysql              > new.mysql
tail -n +$(( 155603+916 )) backup.mysql >> new.mysql

# voila !
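
If you'd rather not do the line arithmetic by hand, a one-pass awk sketch (assuming the dump uses the standard "-- Current Database:" markers, as above) can drop the section as well:

awk '/^-- Current Database:/ { skip = ($0 ~ /`mysql`/) } !skip' backup.mysql > new.mysql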

Crontab runs on different timezone : here’s the fix

A few days ago I got reports that a server was running its cron jobs at strange times. I logged in, and indeed it was: a huge backup was running during peak hours. Saying that it disrupted people's work is an understatement.

To my surprise, the explanation for this issue was not easy to find. It took some googling to find the cause, and even more time to find the correct solution.

So, to cut to the chase – /etc/localtime was a link to /usr/share/zoneinfo/America/New_York.

I changed it to point to /usr/share/zoneinfo/Asia/Jakarta – and voila, now the cron jobs run at the correct times.
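
For reference, the fix looks roughly like this (a sketch, assuming a systemd-based distro where the cron service is named cron; timedatectl set-timezone Asia/Jakarta achieves the same re-link). Cron only reads the timezone at startup, so restart it afterwards:

sudo ln -sf /usr/share/zoneinfo/Asia/Jakarta /etc/localtime
sudo systemctl restart cron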

Hope it helps

XCTB – X Compression Tool Benchmarker

I deal with a lot of big files at work, and storage capacity is not infinite. So it's in my interest to keep file sizes as low as possible.

One way to achieve that is compression. Especially when dealing with log files or database archives, you can save a ton of space with the right compression tool.

But space saving is not the only consideration.

You also need to weigh other factors, such as:

  • File type : different tools compress different types of files differently
  • CPU multi-core capabilities
  • Compression speed
  • Compression size
  • Decompression time

But there are so many great compression tools available on Unix / Linux that choosing which one to use can be confusing even for a seasoned expert.

So I created X Compression Tool Benchmarker to help with this.

Features :

  • Test any kind of file : just pass the file's name as the parameter when calling the script (see the sketch after this list), and it will be tested against all the specified compression tools.
  • Add more compression tools easily : just edit the compressor_list & ext_file variables, and that's it.
  • Fire and forget : just run the script and forget it. It will run without needing any intervention.
  • CSV output : ready to be opened with LibreOffice / Excel and turned into graphs in seconds.
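
A typical run looks like this (the script name xctb.sh here is just a placeholder – use whatever you named the script):

./xctb.sh mysql-backup.sql
# results are written as CSV, ready to be opened in LibreOffice / Excel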

Here’s a sample result for a Database archive file (type MySQL dump) :

The bar chart at the top of this article is based on this result.

As you can see, currently this script will benchmark the following compression tools automatically : pigz – gzip – bzip2 – pbzip2 – lrzip – rzip – zstd – pixz – plzip – xz

The results for different file types may surprise you 🙂

For example, I was surprised to see rzip beat lrzip – because lrzip is supposed to be an enhancement of rzip.

Then I was even more surprised to find out that :

  • I was testing Debian Buster's version of rzip, which turned out to be pretty old – it does not even have multi-thread/core capability
  • But when I tested the latest version of rzip, which can use all 16 cores in my server – it turned out to be slower than the old rzip from Debian Buster !
  • No, disk speed was not an issue – I made sure all the benchmarks were run from an NVMe SSD

So I was grinning at how Debian Buster packaged a very old version of rzip instead of the new one – turns out the joke's on me: the old rzip performs better than the new one, even without multi-core capability.

It was also amazing to see how really, REALLY fast zstd is, while still giving a decent compression ratio. When you absolutely need compression speed, this lesser-known tool turned out to be the clear winner.

And so on, etc

Yes, indeed I had fun 🙂

I hope you will too. Enjoy !


UPDATE : My friend Eko Juniarto published his results here and has permitted me to publish them here as well – thanks. Very interesting indeed.

BCA – List of Correspondent Banks in the United States

One day I was asked for this information (BCA's correspondent banks in the United States) after finishing a seminar in Hawaii, so that my honorarium could be transferred.

It turned out this information could not be found anywhere.

I asked via the BCA call center at 1500888; they didn't know either.

Finally, when my wife happened to have an errand at BCA, she asked about it as well. The answer was that I had to come and ask in person myself.

My wife was furious 😀 hahahaha

What's the logic in requiring me to come to BCA in person just to ask for "BCA correspondent bank information" 😀 ha ha ha

If it's because you have to be a BCA customer – my wife is also a BCA customer; she has her own account at BCA.

In the end BCA's customer service gave up and handed over the information, hahaha. Unbelievable.

I'm including that information here, so hopefully anyone who needs it won't have to go through the same absurdity and waste their time too.

BANK NAME : Bank of New York
ABA ROUTING NUMBER : IRVTUS3N

BANK NAME : Bank of America
ABA ROUTING NUMBER : BOFAUS6S

BANK NAME : Wells Fargo Bank
ABA ROUTING NUMBER : PNBPUS3NNYC

BANK NAME : JP Morgan Chase Bank
ABA ROUTING NUMBER : CHASUS33

BANK NAME : Citibank
ABA ROUTING NUMBER : CITIUS33

BANK NAME : Standard Chartered Bank
ABA ROUTING NUMBER : SCBLUS33

Installing w3af

w3af (Web Application Attack and Audit Framework) is software you can use to check the security of your application / website.

Installing & using it is very easy; just follow this guide:


sudo apt-get update ; sudo apt-get -y install python-pip git

git clone https://github.com/andresriancho/w3af.git
cd w3af/
./w3af_console
# install all the packages it asks for, then run the
# dependency installer script that w3af generates:
/tmp/w3af_dependency_install.sh

Now w3af & all the software packages it needs are installed.

Then create a file named MyScript.w3af with the following content:

(NOTE: don't use the "redos" plugin for now – the last time I used it, it ran for 2 days and exhausted the disk space on my server. Be careful.)


# -----------------------------------------------------------------------------------------------------------
# W3AF AUDIT SCRIPT FOR WEB APPLICATION
# -----------------------------------------------------------------------------------------------------------
#Configure HTTP settings
http-settings
set timeout 30
back
#Configure scanner global behaviors
http-settings
set timeout 20
set max_requests_per_second 100
back
misc-settings
set max_discovery_time 20
set fuzz_cookies True
set fuzz_form_files True
set fuzz_url_parts True
set fuzz_url_filenames True
back
plugins
#Configure entry point (CRAWLING) scanner
crawl web_spider
crawl config web_spider
set only_forward False
set ignore_regex (?i)(logout|disconnect|signout|exit)+
back
#Configure vulnerability scanners
##Specify list of AUDIT plugins type to use
audit blind_sqli, buffer_overflow, cors_origin, csrf, eval, file_upload, ldapi, lfi, os_commanding, phishing_vector, response_splitting, sqli, xpath, xss, xst
##Customize behavior of each audit plugin when needed
audit config file_upload
set extensions jsp,php,php2,php3,php4,php5,asp,aspx,pl,cfm,rb,py,sh,ksh,csh,bat,ps,exe
back
##Specify list of GREP plugins type to use (grep plugin is a type of plugin that can find also vulnerabilities or informations disclosure)
grep analyze_cookies, click_jacking, code_disclosure, cross_domain_js, csp, directory_indexing, dom_xss, error_500, error_pages, html_comments, objects, path_disclosure, private_ip, strange_headers, strange_http_codes, strange_parameters, strange_reason, url_session, xss_protection_header
##Specify list of INFRASTRUCTURE plugins type to use (infrastructure plugin is a type of plugin that can find informations disclosure)
infrastructure server_header, server_status, domain_dot, dot_net_errors
#Configure target authentication
#Configure reporting in order to generate an HTML report
output console, html_file
output config html_file
set output_file /tmp/W3afReport.html
set verbose False
back
output config console
set verbose False
back
back
#Set target informations, do a cleanup and run the scan
target
###### REPLACE WITH THE SITE YOU WANT TO TEST ###############
set target https://google.com
set target_os unix
set target_framework php
back
cleanup
start

Save the file, then run the following command:


./w3af_console -s MyScript.w3af

Now just wait until it finishes; afterwards the report can be viewed at /tmp/W3afReport.html

Enjoy !