
Digbit – automatic bit-flipped domain generation for BitSquatting [python]

BitSquatting is not brand new, but it is a relatively recent technique. This type of attack relies on a random bit flipping from 0 to 1 or from 1 to 0 in RAM. Because of this, even though you try to access a domain like cnn.com, your computer may in fact request ann.com. It is rare, but it happens, and it can be caused by cosmic radiation or by overheated memory modules. If you would like to learn more, I can recommend Artem’s page.

To make it easier to find domains that are within a single bit-flip (a Hamming distance of one), I’ve created a script that generates all the possibilities.

For example, let’s search for domains (bit-wise) close to cnn.com. The script output will be:

snn.com knn.com gnn.com ann.com bnn.com c.n.com cfn.com cjn.com cln.com con.com cnf.com cnj.com cnl.com cno.com cnn.som cnn.kom cnn.gom cnn.aom cnn.bom cnn.cgm cnn.ckm cnn.cmm cnn.cnm cnn.coe cnn.coi cnn.coo cnn.col
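
To see where such candidates come from, here is a minimal, hypothetical illustration (not part of the script): flipping a single bit in the ASCII code of ‘c’ (0x63) turns it into ‘a’ (0x61), which is exactly how a request for cnn.com can end up as ann.com:

c=$(printf '%d' "'c")                        # 99, binary 01100011
flipped=$(( c ^ 2 ))                         # 97, binary 01100001 - bit 1 flipped
printf "flipped char: \\$(printf '%03o' "$flipped")\n"   # prints: flipped char: a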

To make it easier to check whether a particular domain is already registered, I’ve made a wrapper script that executes the Python script and, for each generated domain, runs the following command:

> nslookup domain | grep "NXDOMAIN"

The wrapper script takes a single argument: the domain name. Sample output for twitter.com:

[Image: wrapper output for twitter.com]

Some of the domains reported as available are obviously false positives, since TLDs like kom don’t really exist. I did not remove them because new TLDs are added from time to time, and you might as well have a custom domain set up within your LAN.

The wrapper code:

#!/bin/bash

for i in $( ./digbit.py "$1" );
do
        # count dots - candidates without a dot are never reported as available
        dotcheck=$( echo "$i" | grep -c "\." )
        echo -n "$i ";
        check=$( nslookup "$i" | grep -c "NXDOMAIN" );
        if [[ $check -ne 0 && $dotcheck -ne 0 ]];
        then
                echo " < available";
        else
                echo " ";
        fi;
done
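
Assuming the wrapper is saved as checkdomains.sh (the name is just an example) next to digbit.py, it is run with the target domain as its only argument:

chmod +x checkdomains.sh digbit.py
./checkdomains.sh cnn.com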

The Digbit script code:

#!/usr/bin/env python3
import re
import sys

def new_text2bin(mtext):
        result = [ bin(ord(ch))[2:].zfill(8) for ch in mtext ]
        return ''.join(result)

def switch_bit(bit, my_list ):
        result_list = my_list[:]
        if result_list[bit] == '1':
                result_list[bit] = '0'
        else:
                result_list[bit] = '1'
        return result_list

# Unused helper kept for reference: returns all single-bit-flip variants of a domain
def generate_similar( domain ):
        domains = []
        domain_bits = list(new_text2bin(domain))
        for i in range(len(domain_bits)):
                domains.append(binstr_to_ascii(''.join(switch_bit(i, domain_bits))))
        return domains

def binstr_to_ascii( binstr ):
        binstr = -len(binstr) % 8 * '0' + binstr
        string_blocks = (binstr[i:i+8] for i in range(0, len(binstr), 8))
        string = ''.join(chr(int(char, 2)) for char in string_blocks)
        return string

if len(sys.argv) != 2:
        sys.exit("Usage: digbit.py <domain>")
else:
        domain=str(sys.argv[1])

        domain_list = list(new_text2bin(domain))

        for i in range(len(domain_list)):
                new_d = (''.join(switch_bit(i, domain_list)))

                new_d_str = binstr_to_ascii(new_d)
                # keep only results that still look like a valid domain name
                correct_domain = re.match(r"^(((([a-z0-9]+){1,63}\.)|(([a-z0-9]+(\-)+[a-z0-9]+){1,63}\.))+){1,255}$", new_d_str + ".")

                if correct_domain is not None and len(correct_domain.string) > 1 and correct_domain.string.index(".") < len(correct_domain.string) - 1:
                        print(new_d_str)

        print(" ")


Parsing the authentication log with Python

This simple script is just an exercise. I’m learning Python and, frankly, I just like parsing text files.

The code below parses the /var/log/auth.log file and searches for failed authentication attempts. For each failed attempt it records the IP address, the date, the account used to authenticate and the remote port. It then resolves each IP to a hostname and builds a list of distinct accounts and ports used by each IP address. This list is displayed at the end of the script’s execution. Instead of showing all the ports used, it shows just the range from the lowest to the highest. By default only the first five accounts are displayed in the table (unless the list of those five is longer than 30 characters, in which case it is truncated). If you want to display all recorded accounts, you can change the line in the output loop from this one:

parsed_accounts     = adjust_item( five_accounts,         30 )

to this one:

parsed_accounts     = item["accounts"]

The NOFA and NOFP columns show the number of accounts and the number of ports used, respectively. The date shown can be read as ‘last seen’ for a particular IP address.
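
For reference, this is the kind of sshd line the parser keys on (the hostname, PID, IP and port below are made up, and the exact format can vary between distributions), together with a quick shell check of the two fields it extracts besides the date:

line='Jun  2 08:47:37 myhost sshd[1234]: Failed password for invalid user admin from 61.174.51.35 port 2804 ssh2'
echo "$line" | grep -o 'port [0-9]*'                     # -> port 2804
echo "$line" | sed 's/.*invalid user \([^ ]*\).*/\1/'    # -> admin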

The example output:

[Image: example output of the auth.log parser]

The script:

#!/usr/bin/env python3

# IMPORTS
import re
import socket

# VARS
log_path = '/var/log/auth.log'
hosts=[]
full_hosts_data=[]
previous_ip = ""
previous_host = ""

# ADJUSTING TO FIXED LENGTH
def adjust_item( str, i ):
	if len(str) < i:
		for j in range(i-len(str)):
			str = str + " "
	return str

# AS THE NAME SAYS
def get_hostname( ip ):
	global previous_ip
	global previous_host
	if previous_ip == ip:
		return previous_host
	else:
		try:
			new_host = socket.gethostbyaddr(ip)
			previous_ip = ip
			previous_host = new_host[0]
			return new_host[0]
		except Exception:
			new_host = ip
			previous_ip = ip
			previous_host = ip
			return new_host

# RETURNING FIRST FIVE ACCOUNTS AND NUMBER OF ALL ACCOUNTS TRIED
def first_5( parsed_string ):
	result_5 = ""
	count_all = 0
	if len( parsed_string.split("|") ) > 5 :
		index = 5
		for item in parsed_string.split("|"):
			if index > 0 and len(item) > 0:
				result_5 = result_5 + "|" + item
				index = index - 1
			if len(item) > 0:
				count_all = count_all + 1
	else:
		for item in parsed_string.split("|"):
			if len(item) > 0:
				result_5 = result_5 + "|" + item
				count_all = count_all + 1
	return (result_5, count_all )

# CHECKING PORT RANGE AND NUMBER OF PORTS WITH FAILED PASSWORDS
def port_parser( parsed_string):
	smallest = 66000
	largest = -1
	counter = 0
	for port in parsed_string.split("|"):
		if len(port) > 0:
			if int(port) < smallest:
				smallest = int(port)
			if int(port) > largest:
				largest = int(port)
			counter = counter + 1
	return( largest, smallest, counter )

def get_date( my_line ):
	date_words = my_line.split(":")
	date = date_words[0] +":"+ date_words[1] +":"+ ((date_words[2]).split(" "))[0]
	return date

def get_ports( my_line ):
	port_words = my_line.split(" port ")
	port = (port_words[1]).split(" ")
	return port[0]

def get_username( my_line ):
	username_words = my_line.split("invalid user ")
	username = (username_words[1]).split(" ")
	return username[0]

def get_username2( my_line ):
	username_words = my_line.split("Failed password for ")
	username = (username_words[1]).split(" ")
	return username[0]

def check_distinct(itemlist, my_item):
	item_exists = 0
	my_list = itemlist
	for i in my_list.split("|"):
		if i == my_item:
			item_exists = 1
	if item_exists == 0:
		my_list = my_list + "|" + my_item
	return my_list

# READ FILE
with open(log_path, 'rt') as log:
	text = log.read();

# COLLECTING HOSTS AND IPS
for line in text.split("\n"):
	if len(line) > 5:
		# PARSE LINE AND ADJUST FIELD LENGTH
		check_1 = line.find("cron:session")
		check_2 = line.find("Disconnecting")
		check_3 = line.find("Address")
		if check_1 == -1 and check_2 == -1 and check_3 == -1:
			break_in = line.find("POSSIBLE BREAK-IN ATTEMPT")
			if break_in != -1:
				words = line.split(" [")
				words2 = (words[1]).split("]")
				host = get_hostname( words2[0] )
				exists_check = 0
				for my_host in hosts:
					if my_host["ip"] == words2[0]:
						exists_check = 1
				if exists_check == 0:
					hosts.append({"ip":words2[0], "hostname":host})

for my_host in hosts:
	ports = ""
	accounts = ""
	date = ""

	for line in text.split("\n"):
		# CHECK LINES FOR FAILED PASS ATTEMPTS
		if line.find(my_host["ip"]) != -1 and line.find("Failed password") != -1:

			if line.find("Failed password for invalid ") != -1:
				username = get_username( line ) 				# GET USERNAME
			else:
				username = get_username2( line ) 				# GET USERNAME

			port = get_ports( line ) 							# GET PORT USED
			date = get_date( line ) 							# GET DATE
			ports = check_distinct(ports, port) 				# SAVE ONLY DISTINCT PORTS
			accounts = check_distinct(accounts, username )		# SAVE ONLY DISTINCT ACCOUNTS

	# SAVE ACTUAL ATTEMPTS
	if len(ports) > 1:
		full_hosts_data.append({
			"ip":my_host["ip"],
			"hostname":my_host["hostname"],
			"accounts":accounts,
			"ports":ports,
			"date":date
		});

# PRINT TABLE HEADERS
print(
	adjust_item("DATE", 16 ),
	adjust_item("IP", 15),
	adjust_item("HOSTNAME", 40),
	adjust_item("ACCOUNTS", 30) + adjust_item("NOFA ", 4),
	adjust_item("PORT RANGE", 12),
	adjust_item("NOFP",5)
)

# GENERATING OUTPUT
# DATE             IP              HOSTNAME                                 ACCOUNTS                      NOFA  PORT RANGE   NOFP
# Jun  2 08:47:37  61.174.51.XXX   XXX.51.174.61.dial.XXX.dynamic.163data   root|admin                    2     2804 ->58246 30

for item in full_hosts_data:

	largest_port, smallest_port, port_count = port_parser(item["ports"])
	five_accounts, account_counter = first_5(item["accounts"])

	parsed_ip 			= adjust_item( item["ip"], 			15 )
	parsed_host 		= adjust_item( item["hostname"] , 	40 )
	parsed_accounts 	= adjust_item( five_accounts, 		30 )
	parsed_acounter 	= adjust_item( str(account_counter), 5 )
	parsed_portrange 	= adjust_item(str(smallest_port), 	 5 ) + "->" + adjust_item(str(largest_port) ,5 )
	parsed_port_count	= adjust_item( str(port_count), 	 5 )
	parsed_date 		= adjust_item( item["date"], 		16 )

	print(
		parsed_date[:16],
		parsed_ip, parsed_host[:40],
		parsed_accounts[1:30],
		parsed_acounter,
		parsed_portrange,
		parsed_port_count
	)
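
On most systems /var/log/auth.log is readable only by root, so the script (saved here under an assumed name, auth_parser.py) would typically be run with sudo:

sudo python3 auth_parser.py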

The code above is provided as is. I do not guarantee it will work in your environment specifically. I’ve tested it on Debian Jessie (testing). Please use it at your own risk.

DNS zone transfer script

Notice: I’ve made a web-based version of this script that has more functions and an archive of successful transfers.

A script automating the discovery of name servers that allow zone transfers.

Nothing fancy. Just to make it easier.
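
For context, this is roughly the manual procedure the script automates (example.com and ns1.example.com are placeholders):

host -t ns example.com                    # list the domain's name servers
dig axfr example.com @ns1.example.com     # try a zone transfer against one of them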

The output:

[Image: zone transfer discovered]

If you use the command presented at the bottom of the image above, you will get results like this:

[Image: successful zone transfer for an example domain]


Script:

#!/bin/bash

domains="$1"
data="";

for dnsserver in $(host -t ns "$domains" | cut -d " " -f 4);
do
        # VARIABLES
        dns_server="${dnsserver%.}"                       # strip the trailing dot from the NS name
        zone_data=$(dig axfr "$domains" "@$dns_server")

        # CHECKING ZONE TRANSFER
        check=$(echo "$zone_data" | grep -c "Transfer failed")

        if [[ $check -ne 0 ]];
        then
                echo -e " Transfer \033[31mFAILURE\033[00m at $dns_server"
        else
                echo -e " Transfer \033[32mSUCCESS\033[00m at $dns_server"

                # REMEMBER LAST SUCCESSFUL
                data="$zone_data";
                server="$dns_server"
        fi

done

echo ""
echo " Use command: dig axfr $1 @$server"

# UNCOMMENT THIS IF YOU WANT ZONE DATA OUTPUT
# echo "$data"
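
Saved as zonecheck.sh (the filename is just an assumption), the script takes the domain as its only argument:

chmod +x zonecheck.sh
./zonecheck.sh example.com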


4chan board /hr image downloader

[ command line image download manager ]

Two decades ago, browsing the internet over a 56k modem was an agonizing experience whenever you encountered a webpage rich with pictures. They had to be compressed, and of course the compression was lossy.
Now you can download high-resolution pictures with a click of a button and wait only a couple of seconds for them to load fully. Bandwidth is not an issue anymore.
What >IS< the issue then? Where to get really high-resolution pictures (above 1920×1080) on a specific and very narrow topic.
If you like (as I do) old medieval maps, NASA’s best space pictures, landscape photos or old paintings that are hard to find in high resolution, and you won’t feel offended by occasional nudity, then the /hr board at 4chan.org is the place for you. There you will find multiple collections of really amazing pictures compiled into single threads, just waiting for you to grab them. Yes, this is 4chan, famous for being lightly moderated and for anonymous postings; as warned before, you might encounter some nudity, but I guess that is the price for top-notch pictures you would otherwise never have found.

The /hr board is a collection of threads containing posts with pictures. While I really like some of them, I’m not a patient person when it comes to downloading things manually by clicking on each and every one of them. Therefore, I’ve created a bash script that downloads all the pictures for me automatically. It is fairly simple and works in three phases: first it collects all the links to threads, then it parses those threads and isolates the links to images, and finally it downloads those images to the specified directory.
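
The extraction step itself is a plain curl-plus-sed affair. The pipeline below is lifted from the script and only slightly reformatted, so treat it as an illustration of the idea rather than something guaranteed to survive changes to the 4chan markup:

# Pull the board index, isolate href targets and keep only thread links ("res/"):
curl -s http://boards.4chan.org/hr/ \
  | grep -o '<a .*href=.*>' \
  | sed -e 's/<a /\n<a /g' \
  | sed -e 's/<a .*href=['"'"'"]//' -e 's/["'"'"'].*$//' -e '/^$/ d' \
  | grep "res/" | sort -u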

While it is capable of downloading at full speed, I’ve limited the parsing of web pages to 10000 B/s and the downloading of images to 200 kB/s, using curl and wget respectively.
I think it’s a matter of netiquette not to overload the 4chan servers.
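
These are the flags that do the throttling inside the script (the URLs here are placeholders; the script builds the real ones from its link lists):

curl -s --limit-rate 10000B http://boards.4chan.org/hr/                 # throttled page fetch
wget -q --limit-rate=200k -P ./some_dir http://example.com/image.jpg    # throttled image download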

Take a peek at how it looks when executed:

  1. Collecting links to sub-pages:

[Image: collecting links to sub-pages of 4chan]

  2. Collecting links to images:

[Image: collecting links to 4chan images]

  3. Downloading images:

[Image: downloading images from 4chan]


The functions’ definitions are in a separate file below.

Without further ado, here it is:

dwm.sh


#!/bin/bash

 source myfunctions.sh

 ############################
 #        VARIABLES         #
 ############################

 today=`date +%d.%m.%y`
 time=`date +%H:%M:%S`
 cols=$(tput cols)
 lines=$(tput lines)
 download_counter=0;
 curltransfer="100000B"

 margin_offset=0;
 margin_text_offset=2;

 let top_box_t=0;
 let top_box_b=0+5;
 let top_box_l=0+$margin_offset;
 let top_box_r=$cols-$margin_offset;
 let top_box_width=$cols-$margin_offset-$margin_offset

 site="http://boards.4chan.org/hr/"

 if [[ ! -d "./$today" ]]; then  mkdir ./$today; fi

 tput civis
 clear

 draw_top_box;
 draw_bottom_box;
 scrap_links;

 cat links.4chan | sort -u >> uniquelinks.4chan
 if [[ -e links.4chan ]]; then rm links.4chan; fi
 cat uniquelinks.4chan | grep -v "board" | cut -d "#" -f 1 > tmp.4chan
 rm uniquelinks.4chan
 cat tmp.4chan | sort -u >> uniquelinks.4chan
 rm tmp.4chan

 scrap_images;

 cat images.4chan | sort -u >> uniqueimages.4chan
 rm images.4chan

 draw_panel;
 draw_headers;

 download_images;

 tput cup 20 0;

 tput cnorm

And required functions file:
myfunctions.sh

#!/bin/bash

check_image()
{
        echo "1:" $1
        echo "2:" $2
}

draw_headers()
{
        tput cup $(($top_box_t+2)) $(($top_box_l+2));
        echo -en "$EINS\033[1;30m\033[40m#\033[0m";
        tput cup $(($top_box_t+2)) $(($top_box_l+6));
        echo -en "$EINS\033[1;30m\033[40mFILE NAME\033[0m";
}

download_images()
{

        let scroll_lines=$lines-6
        top=4

        tput cup $top 0
        scrolled_lines=1;
        allfiles=`cat uniqueimages.4chan | wc -l`

        index=0

        for i in `cat uniqueimages.4chan`
        do

                filename=`echo $i | cut -d "/" -f 6`
                if [[ $((index%$scroll_lines)) -eq 0 ]];
                then
                        tput cup $top 0
                        for ((j=0; j<$scroll_lines; j++))
                        do
                                echo -e "$EINS\033[32m\033[40m                                                                          \033[0m";
                        done
                        tput cup $top 0
                fi

                echo -ne "\033[s"

#               if [[ $index -gt 999 ]];
#               then
#                       tput cup $top 0
#                        for ((j=0; j<$scroll_lines; j++))
#                        do
#                                echo -e "$EINS\033[32m\033[40m                                                                          \033[0m";
#                        done
#                        tput cup $top 0
#                       let index=1
#                       echo -e "   $index  $EINS\033[30m\033[47mhttp:$i\033[0m"
#               fi
                if [[ $index -lt 10 ]];
                then
                        echo -e "   $index  $EINS\033[30m\033[47mhttp:$i\033[0m"
                elif [[ $index -lt 100 && $index -gt 9 ]];
                then
                        echo -e "  $index  $EINS\033[30m\033[47mhttp:$i\033[0m"
                elif [[ $index -lt 1000 && $index -gt 99 ]];
                then
                        echo -e " $index  $EINS\033[30m\033[47mhttp:$i\033[0m"
                elif [[ $index -gt 999 ]];
                then
                        tput cup $top 0
                        for ((j=0; j<$scroll_lines; j++))
                        do
                                echo -e "$EINS\033[32m\033[40m                                                                          \033[0m";
                        done
                        tput cup $top 0
                        echo -ne "\033[s"
                        let index=1
                        echo -e "   $index  $EINS\033[30m\033[47mhttp:$i\033[0m"
                fi

                #DOWNLOADING HERE
                color=1
                size=0
                download_check=`cat ./4chan_download.log | grep $filename | wc -l`
                if [[ $download_check -eq 0 ]];
                then
                        let color=1
                        wget -q --limit-rate=200k -P ./$today http:$i
                        size=`ls -hls ./$today/$filename | cut -d " " -f 6`
                        #ls -hls ./$today/$filename | cut -d " " -f 5
                        let download_counter=$download_counter+1
                else
                        let color=2
                fi

                echo -ne "\033[u"
                if [[ $index -lt 10 ]];
                then
                        echo -en "   $index  $EINS\033[m\033[40mhttp:$i\033[0m"
                        if [[ $color -eq 1 ]];
                        then
                                echo -e  "\t[$EINS\033[32m\033[40m+\033[0m]"
                        else
                                echo -e  "\t[$EINS\033[33m\033[40m*\033[0m]"
                        fi
                elif [[ $index -lt 100 && $index -gt 9 ]];
                then
                        echo -en "  $index  $EINS\033[m\033[40mhttp:$i\033[0m"
                        if [[ $color -eq 1 ]];
                        then
                                echo -e  "\t[$EINS\033[32m\033[40m+\033[0m]"
                        else
                                echo -e  "\t[$EINS\033[33m\033[40m*\033[0m]"
                        fi
                elif [[ $index -lt 1000 && $index -gt 99 ]];
                then
                        echo -en " $index  $EINS\033[m\033[40mhttp:$i\033[0m"
                        if [[ $color -eq 1 ]];
                        then
                                echo -e  "\t[$EINS\033[32m\033[40m+\033[0m]"
                        else
                                echo -e  "\t[$EINS\033[33m\033[40m*\033[0m]"
                        fi
                fi

                let index=$index+1

                echo -ne "\033[s";
                #draw_bottom_box;
                tput cup $top_box_t $(($top_box_l+20));
                echo -en "$EINS\033[30m\033[47mDOWNLOADED $download_counter/$allfiles\033[0m";
                echo -ne "\033[u";
        done
}

scrap_images()
{
 tput cup $(($top_box_t+5)) $(($top_box_l+1));
 echo -en "$EINS\033[1;30m\033[40mSCRAPING IMAGES\033[0m";
 tput cup $(($top_box_t+5)) $(($top_box_l+20));
 echo -en "$EINS\033[1;30m\033[40m[\033[0m";
 tput cup $(($top_box_t+5)) $(($top_box_l+36));
 echo -en "$EINS\033[1;30m\033[40m]\033[0m";

 urls=`cat uniquelinks.4chan | wc -l`
 index=0;

 position=21
 for i in `cat uniquelinks.4chan`;
 do

        let index=$index+1
        tput cup $top_box_t $(($top_box_l+20));
        echo -en "$EINS\033[30m\033[47mSCRAPED $index/$urls\033[0m";

        #HERE GOES THE CODE FOR images/ SCRAPING
        let left=$position
        tput cup $(($top_box_t+5)) $(($top_box_l+$left));
        echo -en "$EINS\033[1;30m\033[40m-\033[0m";

        curl -s --limit-rate 10000B http://boards.4chan.org/hr/$i | grep -o '<a .*href=.*>' | sed -e 's/<a /\n<a /g' |  sed -e 's/<a .*href=['"'"'"]//' -e 's/["'"'"'].*$//' -e  '/^$/ d' | grep "images" | uniq >> images.4chan

        let position=$position+1
        if [[ $position -eq 36 ]];
        then
                tput cup $(($top_box_t+5)) $(($top_box_l+36));
                echo -en "$EINS\033[1;30m\033[40m]\033[0m";
                let position=21;
                tput cup $(($top_box_t+5)) $((1+$position));
                echo -en "$EINS\033[1;30m\033[40m              \033[0m";
        fi

 done

#CLEAN PROGRESS BAR
 for i in {1..14};
 do
        let left=$((19+$i))
        tput cup $(($top_box_t+5)) $(($top_box_l+$left));
        echo -en "$EINS\033[1;30m\033[40m \033[0m";
 done

#MARK AS COMPLETE
 tput cup $(($top_box_t+5)) $(($top_box_l+20+14));
 echo -en "$EINS\033[1;30m\033[40m[\033[0m";
 echo -en "$EINS\033[32m\033[40m+\033[0m";
 echo -en "$EINS\033[1;30m\033[40m]\033[0m";

#CLEAN COUNTER
 tput cup $top_box_t $(($top_box_l+20));
 echo -en "$EINS\033[30m\033[47m                                \033[0m";

}

scrap_links()
{
 if [[ -e links.4chan ]];
 then
        rm links.4chan;
 fi
 tput cup $(($top_box_t+4)) $(($top_box_l+1));
 echo -en "$EINS\033[1;30m\033[40mSCRAPING LINKS\033[0m";
 tput cup $(($top_box_t+4)) $(($top_box_l+20));
 echo -en "$EINS\033[1;30m\033[40m[\033[0m";
 tput cup $(($top_box_t+4)) $(($top_box_l+36));
 echo -en "$EINS\033[1;30m\033[40m]\033[0m";

#CLEAN OUTPUT FILE
        if [[ -e links.4chan ]]; then rm links.4chan; fi

#SCRAPE THE FIRST PAGE
 curl -s  --limit-rate $curltransfer http://boards.4chan.org/hr/ | grep -o '<a .*href=.*>' | sed -e 's/<a /\n<a /g' |  sed -e 's/<a .*href=['"'"'"]//' -e 's/["'"'"'].*$//' -e '/^$/ d' | grep "res/" | sort -u >> links.4chan

#SCRAP REST
 for i in {1..15};
 do
        let left=$((20+$i))
        tput cup $(($top_box_t+4)) $(($top_box_l+$left));
        echo -en "$EINS\033[1;30m\033[40m-\033[0m";
        curl -s  --limit-rate $curltransfer http://boards.4chan.org/hr/$i | grep -o '<a .*href=.*>' | sed -e 's/<a /\n<a /g' |  sed -e 's/<a .*href=['"'"'"]//' -e 's/["'"'"'].*$//' -e '/^$/ d' | grep "res/" | sort -u  >> links.4chan
 done

#CLEAN PROGRESS BAR
 for i in {1..14};
 do
        let left=$((19+$i))
        tput cup $(($top_box_t+4)) $(($top_box_l+$left));
        echo -en "$EINS\033[1;30m\033[40m \033[0m";
 done

#MARK AS COMPLETE
 tput cup $(($top_box_t+4)) $(($top_box_l+20+14));
 echo -en "$EINS\033[1;30m\033[40m[\033[0m";
 echo -en "$EINS\033[32m\033[40m+\033[0m";
 echo -en "$EINS\033[1;30m\033[40m]\033[0m";
}

function draw_top_box()
{
 for (( i=0; i<$top_box_width; i++ ))
 do
        let left=$top_box_l+$i;
        tput cup $top_box_t $left;
        echo -en "$EINS\033[30m\033[47m \033[0m";
 done

 tput cup $top_box_t $(($top_box_l+2));
 echo -en "$EINS\033[30m\033[47mDWM v.1.0\033[0m";
 tput cup $top_box_t $(($cols-20));
 echo -en "$EINS\033[30m\033[47m$time | $today\033[0m";

 tput cup $lines 0;

}

function draw_bottom_box()
{
 tput cup $lines 0;
 for (( i=0; i<$top_box_width; i++ ))
 do
        let left=$top_box_l+$i;
        tput cup $lines $left;
        echo -en "$EINS\033[30m\033[47m \033[0m";
 done

 tput cup $lines 0;
 echo -en "$EINS\033[30m\033[47m  DOWNLOADED FILES: $download_counter\033[0m";
}

function draw_panel()
{
 for (( i=0; i<$top_box_width; i++ ))
 do
        let left=$top_box_l+$i;
        tput cup $(($top_box_t+1)) $left;
        echo -en "$EINS\033[1;30m\033[40m-\033[0m";
 done

 for (( i=0; i<$top_box_width; i++ ))
 do
        let left=$top_box_l+$i;
        tput cup $(($top_box_t+3)) $left;
        echo -en "$EINS\033[1;30m\033[40m-\033[0m";
 done

 tput cup $(($top_box_t+2)) $top_box_l;
 echo -en "$EINS\033[1;30m\033[40m|\033[0m";
 tput cup $(($top_box_t+2)) $top_box_r;
 echo -en "$EINS\033[1;30m\033[40m|\033[0m";
 tput cup $(($top_box_t+2)) $(($top_box_l+4));
 echo -en "$EINS\033[1;30m\033[40m|\033[0m";

 tput cup $(($top_box_t+1)) $top_box_l;
 echo -en "$EINS\033[1;30m\033[40m+\033[0m";
 tput cup $(($top_box_t+1)) $top_box_r;
 echo -en "$EINS\033[1;30m\033[40m+\033[0m";

 tput cup $(($top_box_t+3)) $top_box_l;
 echo -en "$EINS\033[1;30m\033[40m+\033[0m";
 tput cup $(($top_box_t+3)) $top_box_r;
 echo -en "$EINS\033[1;30m\033[40m+\033[0m";

 tput cup $(($top_box_t+3)) $(($top_box_l+4));
 echo -en "$EINS\033[1;30m\033[40m+\033[0m";

 tput cup $(($top_box_t+1)) $(($top_box_l+4));
 echo -en "$EINS\033[1;30m\033[40m+\033[0m";

 tput cup $(($top_box_t+1)) $(($top_box_r-5));
 echo -en "$EINS\033[1;30m\033[40m+\033[0m";

 tput cup $(($top_box_t+3)) $(($top_box_r-5));
 echo -en "$EINS\033[1;30m\033[40m+\033[0m";

 tput cup $(($top_box_t+2)) $(($top_box_r-5));
 echo -en "$EINS\033[1;30m\033[40m|\033[0m";

}
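
To run the downloader, keep both files in the same directory. A minimal setup looks like this (the 4chan_download.log file is consulted to skip images that were already fetched, so create it if it does not exist):

touch 4chan_download.log
chmod +x dwm.sh
./dwm.sh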

Be aware that all scripts are run at your own risk and while every script has been written with the intention of minimising the potential for unintended consequences, the owners, hosting providers and contributors cannot be held responsible for any misuse or script problems.
