Saturday, February 28, 2009

Cacti Monitor Server Step by Step Ubuntu

Ok, so in my other posts I have given you a way to get an ISO image with Cacti already running; note that it runs on CentOS through rPath. If you want to get a Cacti server up and running quickly, download that image.

Apple Mac OS X Users - I suggest that you think of Cacti as more of an appliance. Installing it on OS X Server is not only more complex, but the constant stream of OS updates may break your install after you spend a lot of time getting things running properly. Instead, I suggest that you build a VMware/Parallels image so that it is more transportable in the future. Traveling from tradeshow to tradeshow we have found this works great, and backups are easier as well. I also suggest creating it on Ubuntu, as this is the next easiest server OS, and everything below is detailed for getting it up and running with tons of plugins.

If you want to build the cacti monitoring server from scratch then read below. Please note that this is a fresh Ubuntu 8.10 VMware image with nothing installed except for SSH server.

Update and Upgrade - I went ahead and ran the latest patches for the OS.
apt-get update
apt-get upgrade
Install AMP - (Apache MySQL PHP)
apt-get install mysql-server apache2
apt-get install php5 php5-gd php5-mysql
Now we need to install Cacti and its components
apt-get install cacti-cactid
After you work through the Cacti installer and change the default password, you will need to change a couple of settings in Cacti:

settings > paths > Spine Poller File Path [ /usr/sbin/spine ]

settings > poller > Poller Type [spine]

OpenSolaris Disk Utility

So I recently started building an OpenSolaris NAS device and wanted to check out the internal hard drives. I needed to format the drives and also get the IDs of the drives to add to a ZFS pool. Here are some directions. First, list the drives and their IDs:
iostat -En
Formatting the Solaris drives is done with this
/usr/sbin/format < /dev/null
Checking the mounted partitions is done with
df -h
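To give an idea of where the drive IDs go (the pool name and device IDs below are examples, not from my box): once iostat -En shows you IDs like c4t0d0, you hand them straight to zpool.

```shell
# Example only: create a mirrored ZFS pool named "tank" from two drives,
# then check its health. Substitute your own device IDs from iostat -En.
zpool create tank mirror c4t0d0 c4t1d0
zpool status tank
```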
I will be posting a ton of information soon on how to go from start to finish with an OpenSolaris NAS device.

Wednesday, February 25, 2009

Creating a Ruby on Rails Mailer

So this is a pretty easy task to accomplish.

From the command line switch to the directory where your app is located. Enter the following command

  script/generate mailer notifier
This will create a model called notifier.rb and a folder under the views folder called notifier. It will create other files as well, but these are the main two to be concerned with.

views/notifier is where we will create the erb templates that become the actual email sent out to the end user.

In the notifier model, you'll want to set up methods that you will call from your controller in order to send the email. Here are a couple of examples I have created for my user email functions:

class UserNotification < ActionMailer::Base
  def password_reset_code(user)
    @recipients   = "#{}"   # fill in the recipient address here
    @from         = " "     # fill in the from address here
    @subject      = "You requested a password reset."
    @sent_on      = Time.now
    @body         = { :user => user }
  end

  def reset_notification(user)
    @recipients   = "#{}"   # fill in the recipient address here
    @from         = " "     # fill in the from address here
    @subject      = "Your password has been reset."
    @sent_on      = Time.now
    @body         = { :user => user }
  end
end
As you can probably tell, this sets up two emails: one for sending a reset code, the other to verify that the password has been reset.

In your controller, you simply call the model; in my example it's called UserNotification.

The @user is an instance variable I created a couple of lines up; it is a simple query that finds the currently logged-in user. Notice you always prepend the mailer method name with "deliver_" when calling it, e.g. UserNotification.deliver_password_reset_code(@user).

Back up in our UserNotification methods, you can see that we assign a simple hash to the @body variable. The hash holds the model data passed into it and makes that data available in the view. To reference the data, simply use the instance variable, in our case @user, to access the model data.

Here is an example view


Here is your password reset code:


Please copy and paste the url below into your browser in order to reset your password: <%= @user.password_reset_code %>

Using the jQuery Cycle Plugin

Recently I had to do a small project that involved creating a learning tool - virtual flash cards. I used the jQuery plugin Cycle to provide the animation effects for the "stack" of cards and I also used another jQuery plugin called quickFlip.

Basically I created a way for the user to grab a glossary from a specific course and then I used php and mysql to loop through the results each time generating the following div tags:

<div class="quickFlip">
  <!-- Front of card -->
  <div class="quickFlipPanel front-card">
    <h4 class=""><!-- the glossary term goes here --></h4>
    <p class="quickFlipCta" style="text-align: center; color: #ff0000;">Click here to flip the card over.</p>
  </div>

  <!-- Back of card -->
  <div class="quickFlipPanel back-card">
    <h4 class=""><!-- the definition goes here --></h4>
    <div style="text-align: center; width: 615px; height: 100px;"></div>
  </div>
</div>
Here are the javascript functions defined for update_correct, update_incorrect, and show_results:

var correct_count = 0;
var wrong_count = 0;

function update_correct(pos) {
    correct_count = correct_count + 1;
    document.getElementById("number-correct").innerHTML = correct_count;
    var t = quickFlip.flip(pos, 1, 1);
}

function update_incorrect(pos) {
    wrong_count = wrong_count + 1;
    document.getElementById("number-wrong").innerHTML = wrong_count;
    var t = quickFlip.flip(pos, 1, 1);
}

function show_results() {
    $("#num_correct").html('You got ' + correct_count + ' right.');
    $("#num_wrong").html('You got ' + wrong_count + ' wrong.');
}

Use rSync to strip hidden files

Recently I was working on a project where I had some GIT repository files and also Apple HFS+ extended attributes. I struggled for what seemed like hours trying to figure out how to remove all of the .DS_Store and .git files. Then I remembered rsync, which I could use to EXCLUDE files. Now I am sure this could have been much cleaner, and I will probably repost a better version in the future, but here are my commands. Remember to look at the files using:
ls -la
Look for any files that begin with "." — here are the commands I used.
rsync -r --exclude '._*' source_folder/ destination_folder/
I was already in the folder which contained both the source and destination, hence the lack of a leading "/" on the source and destination paths.

Here is another script:
rsync -r --exclude '.git*' source_folder/ destination_folder/
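The two commands above can also be combined into a single pass, since rsync accepts --exclude any number of times. Here is a minimal, self-contained demo (the demo_src/demo_dst folder names are made up for illustration):

```shell
#!/bin/sh
# Build a small tree with the kinds of hidden files the post talks about
mkdir -p demo_src/.git
echo "real content" > demo_src/keep.txt
touch demo_src/._resource demo_src/.DS_Store

# One rsync pass that strips AppleDouble files, .DS_Store, and git metadata
rsync -r --exclude '._*' --exclude '.DS_Store' --exclude '.git*' demo_src/ demo_dst/

ls -la demo_dst/
```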

Archive tar.gz files and folders

I am sure almost everyone knows this already, but when I first started working with Linux and Unix I needed help and had to have an expert show me how to do this. I even have to pull up my notes every now and again.

Use this command to compress a folder using gzip and tar
tar -cvzf /destination/file_name.tar.gz /source/folder_name
Now you can unzip/untar it with this command
tar -xzf /folder/file_name.tar.gz
There are a ton of switches you can use with this. I am just making an archive and putting it in a folder for the rsync server to pick up for archival.
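Here is a quick end-to-end sketch of the two commands above (the folder and file names are made up for the demo), plus the -t switch for peeking inside an archive without extracting it:

```shell
#!/bin/sh
mkdir -p stuff
echo "hello" > stuff/note.txt

# Compress the folder
tar -cvzf stuff.tar.gz stuff

# List the archive contents without extracting
tar -tzf stuff.tar.gz

# Extract into another folder with -C
mkdir -p restore
tar -xzf stuff.tar.gz -C restore
cat restore/stuff/note.txt
```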

Crontab script for Ruby on Rails (RoR)

Recently I had to figure out a way to get RoR scripts to run from nightly crontab jobs. It took me a little while to get it working, but here is the script we ended up using. Results may vary; if you have problems, remember that permissions are usually the first place to look, then check the path of the folder and use "which" to make sure cron can find the program/application.

0 9 * * 1 cd /path/to/application_root/current/ && /usr/bin/rake email:excel RAILS_ENV=production

This will run at 9:00 AM each Monday. It runs from the current directory so that we always have the latest version of the code. Have fun, and note this is on Ubuntu Linux.
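One tweak I would suggest (the log path here is just an example, not from the original setup): redirect the rake output to a file so failures are easier to debug, since cron otherwise discards or mails it.

```
0 9 * * 1 cd /path/to/application_root/current/ && /usr/bin/rake email:excel RAILS_ENV=production >> /var/log/rails_cron.log 2>&1
```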

Editing Crontab with nano instead of vi

I know that all of the vi people are going to blast me for this, but I like nano much better for quick and dirty editing. So when it comes to editing the crontab, here is a quick way to change the terminal editor.
EDITOR=nano crontab -e
You should be able to substitute any editor in place of nano; Mac OS X has pico, which is essentially the same as nano.
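A quick sketch of the two ways this works: the inline form only affects that one command, while exporting EDITOR from your shell profile makes it the default everywhere (the snippet file name below is made up; you would append that line to ~/.bashrc, assuming bash):

```shell
#!/bin/sh
# Inline: the variable only exists for this one command.
# (A harmless echo stands in for crontab -e here.)
env EDITOR=nano sh -c 'echo "editor is $EDITOR"'

# Permanent: put the export into your profile.
# Written to a demo file here; append the line to ~/.bashrc for real use.
echo 'export EDITOR=nano' > editor_snippet.sh
. ./editor_snippet.sh
echo "editor is now $EDITOR"
```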

Crontab script for rSync

What if you could run a single script each night and have the files from one server automatically copy to another for backup? Most people use a third-party application for this or hand-rolled copy scripts.

The point of this rsync script is to sync only the files that have changed, so that we don't do more work than is needed. It compares the files on the source to the target; once it finds the files that need to be copied, it compresses them and copies them to the destination server. I set the script up to run on a regular basis.
0 3 * * * rsync -avz --password-file=/password_file /path/to/folder/
This will run every night at 3:00 AM: archive mode + compress, using a password file, syncing the source folder with the destination folder under the given username.

Look for my directions on how to set up an rsync server. Also note that this should work fine on Mac OS X, but I have not fully tested it on that OS.

Crontab scripts for MySQL

So if you're like most web programmers, you have a mysql database running your site. You need to back up your database or get a copy of it each night. Most providers give you backups already, and this is probably the script they use.

Make sure that you have a user with the following permissions (Select + Lock_tables + Show_view). If these minimum permissions are not available, the mysqldump will fail.
0 1 * * * mysqldump --opt DB_NAME -u USER --password=PASSWORD > /path/to/file/DB_NAME_`date +\%m`-`date +\%d`-`date +\%Y`.dump
The nice thing about this is that it appends the date to the end of the file name for archiving. Be careful with trying to dump all of the databases at once; I have found it is best to space the crontab entries about 5 minutes apart each night during a slow time.
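Outside of a crontab the % signs need no backslash escaping; here is what the three date calls build up (DB_NAME is a made-up example):

```shell
#!/bin/sh
DB_NAME="mydb"   # hypothetical database name
# Same month-day-year stamp the cron line produces, e.g. mydb_02-28-2009.dump
dumpfile="${DB_NAME}_$(date +%m)-$(date +%d)-$(date +%Y).dump"
echo "$dumpfile"
```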

Now if you want to automate this completely then check out my rsync jobs which will then copy the mysql dump to an archival system. Also check out the script that you can use to send information from one server to another via ssh.

Tuesday, February 24, 2009

MySQL Dump to another Server

When you have a development MySQL server and a production server, it sometimes comes in handy to move a copy from production to development. Usually that requires a mysqldump, then an scp to the other server, then loading the data. Here are some scripts you can use to do it in a single command:

Here are the normal commands that we would have had to use:

Dump the database
mysqldump --opt database_name -u root -p > database_name.dump
Copy the file
scp /path/to/file.dump
Then importing the data
mysql database_name -u root -p < database_name.dump
Now for a single command. This assumes you don't need SSH and the MySQL ports are open:
mysqldump db-name | mysql -h db-name
And here is the same thing done over SSH
mysqldump db-name | ssh mysql db-name
This should save some time; just make sure you have access to run the commands on both machines.
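For completeness, here is roughly how the two one-liners look with host names spelled out. Everything here - hosts, database names, credentials - is a placeholder I made up, so adjust to your environment:

```
# Direct pipe -- both machines' MySQL ports reachable:
mysqldump --opt source_db -u root -p | mysql -h dev.example.com dest_db -u root

# Over SSH -- only port 22 needs to be open on the development box:
mysqldump --opt source_db -u root -p | ssh user@dev.example.com "mysql dest_db -u root"
```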

Monday, February 23, 2009

Wildcard SSL Multiple sites under same IP

Ok, so this was a bit tricky, but I knew it would work, and this is not for everyone out there. Remember, this is for subdomains, and I got a wildcard SSL certificate from GoDaddy (look up my instructions for installing a GoDaddy certificate).

I am using Ubuntu 8.10 server for this and not many changes from the standard install. I want to support multiple sites through 443 on the same IP.

Go to /etc/apache2/ports.conf. Here is what I have; yours might be a bit different
# If you just change the port or add more ports here, you will likely also
# have to change the VirtualHost statement in
# /etc/apache2/sites-enabled/000-default

NameVirtualHost *:80
Listen 80

<IfModule mod_ssl.c>
    # SSL name based virtual hosts are not yet supported, therefore no
    # NameVirtualHost statement here
    NameVirtualHost *:443
    Listen 443
Then set up the virtual host files just like normal. Here is the opening tag for each SSL vhost
<VirtualHost *:443>
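Since it is one wildcard certificate, every 443 vhost can point at the same certificate files. Here is a sketch of what a full vhost might look like; the ServerName, DocumentRoot, and certificate paths below are placeholders, not from my setup:

```apache
<VirtualHost *:443>
    ServerName   app.example.com
    DocumentRoot /var/www/app

    SSLEngine on
    SSLCertificateFile      /etc/ssl/certs/wildcard.example.com.crt
    SSLCertificateKeyFile   /etc/ssl/private/wildcard.example.com.key
    SSLCertificateChainFile /etc/ssl/certs/gd_bundle.crt
</VirtualHost>
```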

Saturday, February 21, 2009

PHP CLI FTP Uploader

So I wanted to create an automated FTP uploader for a project I was working on. I set a crontab to execute it and then hooked up a database as well.

Here is the code:
#!/usr/bin/php -q
#script used for uploading file to a server
//------ Begin Programming

$ftp_server = "";
$ftp_retries = "5";
$ftp_delay = "3";
$ftp_user = "user";
$ftp_pass = "password";

//This worked on the server
$source_directory = "/home/user/videos/flv/";
$remote_directory = "/httpdocs/movies/";

//mysql connection
define ('DB_User', 'user');
define ('DB_Password', 'password');
define ('DB_Host', 'x.x.x.x');
define ('DB_Name', 'db_name');

$dbc = mysql_connect (DB_Host, DB_User, DB_Password) OR
die ('Could not connect to MySQL Server: ' . mysql_error() );
mysql_select_db (DB_Name) or
die ('Could not select the database: ' . mysql_error() );

//do a query
$sql_query = "your query";
$result = @mysql_query ($sql_query);
$videos = mysql_fetch_array ($result, MYSQL_ASSOC);
$counter = mysql_num_rows($result);

if ($counter > 0) {

//mark the row as processing
$sql_query = "UPDATE SCRIPT";
$result = @mysql_query ($sql_query);

$conn_id = ftp_connect($ftp_server);

$login_result = ftp_login($conn_id, $ftp_user, $ftp_pass);

$source = $source_directory.$videos['processed_file'].".flv";
$remote = $remote_directory.$videos['processed_file'].".flv";

if (ftp_put($conn_id, $remote, $source, FTP_BINARY)) {
//            echo "successfully uploaded $source\n";
} else {
//mark file to reupload again
$sql_query = "UPDATE SCRIPT";
$result = @mysql_query ($sql_query);
//            echo "There was a problem while uploading $source to $remote\n";
}

} // end of the if ($counter > 0) block

// Close connection
ftp_close($conn_id);
mysql_close($dbc);

FFMpeg Scripts

Once you have an ffmpeg server built, here are some recipes for creating output

Flash content
ffmpeg -i input_file -vcodec flv -b 666k -s 380x286 -ar 22050 -ab 24 -f flv output-file.flv
JPG image snapshot taken from the 2nd second of the video
ffmpeg -i input_file -r 1 -s 380x286 -ss 2.00 -vframes 1 -an -f image2 output-file.jpg
iPhone output
ffmpeg -i input-file -vcodec libx264 -vpre hq -vpre ipod320 -b 768k -bt 768k -s 320x240 -threads 0 -title techIT  -f mp4 -acodec libfaac -ab 128k output-file.mp4
I am sure you can setup some other settings you want for the output.
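If you have a whole folder of source videos, the flash recipe above is easy to wrap in a loop. This is just a sketch of the shape, not a verified production script; the FFMPEG variable is overridable so you can dry-run it with echo before pointing it at real files:

```shell
#!/bin/sh
# Batch-convert every .avi in a folder to .flv with the flash recipe above.
FFMPEG="${FFMPEG:-ffmpeg}"   # override with FFMPEG=echo for a dry run

convert_all() {
    src_dir="$1"
    for f in "$src_dir"/*.avi; do
        [ -e "$f" ] || continue                 # folder had no .avi files
        out="${f%.avi}.flv"
        $FFMPEG -i "$f" -vcodec flv -b 666k -s 380x286 -ar 22050 -ab 24 -f flv "$out"
    done
}
```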

FFMpeg on Ubuntu 8.10

The goal of this project was to build a video converter for a website kind of like YouTube. Look for more posts on this and extra modules like php-cli scripts and much more.

Let's get going with an FFmpeg installation. I started with a clean Ubuntu 8.10 Server machine with nothing but the ssh server installed.

Add the EXTRA Medibuntu repository to the list of sources. (on one line)
wget$(lsb_release -cs).list --output-document=/etc/apt/sources.list.d/medibuntu.list
Generate the key
apt-get update && sudo apt-get install medibuntu-keyring && sudo apt-get update
Then install the codecs from Medibuntu
apt-get install w32codecs
If you have a 64-bit system then use this
apt-get install w64codecs
Install updates and upgrades
apt-get update
apt-get upgrade
Install optional plugins. (should be one line)
apt-get install libsdl1.2-dev zlib1g-dev libfaad-dev libfaac-dev  libmp3lame-dev libtheora-dev libvorbis-dev libxvidcore4-dev  libschroedinger-dev libspeex-dev libgsm1-dev
Now I recommend creating a folder to put all of the compiled software in so we can make build updates in the future.
cd /home/user/
mkdir build
cd build
You might need to install the compilers for your system; here is the command for this
apt-get install build-essential
If you don't have "git" installed on your system run this
apt-get install git-core
Installing x264 from source
git clone git://
cd x264
NOTE/UPDATE - I recently had to add another piece of software to my machine to get this to work:
apt-get install yasm
You also need to install the checkinstall application
apt-get install checkinstall
./configure --enable-shared
sudo checkinstall --fstrans=no --install=yes --pkgname=x264 --pkgversion "1:0.svn`date +%Y%m%d`-0.0ubuntu1"
sudo ldconfig
If you need to install subversion then run this command
apt-get install subversion
Great, now you have the x264 encoder. Next we need to build ffmpeg from source as well. Make sure to go back into the build folder before beginning this next set of steps.
cd /home/user/build
svn checkout svn:// ffmpeg
cd ffmpeg
./configure --enable-gpl --enable-postproc --enable-pthreads --enable-libfaac --enable-libfaad --enable-libmp3lame --enable-libtheora --enable-libx264 --enable-libxvid --enable-libgsm --enable-nonfree
sudo checkinstall --fstrans=no --install=yes --pkgname=ffmpeg --pkgversion "3:0.svn`date +%Y%m%d`-12ubuntu3"
Now ffmpeg should be ready to go. You can test the installation by simply typing ffmpeg at the command line. If you need to recompile the source code, simply run the same steps from these folders again in the future. Good luck, and look for more information.

Friday, February 20, 2009

CentOS 5.2 Mongrel Cluster

I had to dig these instructions up from my archives now that I have switched to Ubuntu 8.10. Everything should still work perfectly fine.

Check to see if Ruby and Gems are already installed on your machine.
gem list --local
If nothing happens, then you need to install Ruby on Rails and then the gems, which should be posted here soon. If you get a list of gems, look for mongrel or mongrel_cluster. If you're not seeing those gems, then you need to install them
sudo gem install mongrel
sudo gem install mongrel_cluster
Once you get them installed, you will need to add a user for the service.
/usr/sbin/adduser -r mongrel
Now we need to create the folder for the mongrel config files
mkdir /etc/mongrel_cluster
Now we need to copy the mongrel_cluster init script to the init.d directory so it can execute. NOTE: this line is very long so it is shown on 2 lines; also note that the version changes, so please check the path or use tab completion to get the most recent version.
cp /usr/lib/ruby/gems/1.8/gems/mongrel_cluster-1.0.2/
resources/mongrel_cluster /etc/init.d/
Now we need this to be executable
chmod +x /etc/init.d/mongrel_cluster
Now let's try and start it up
/etc/init.d/mongrel_cluster start && sudo /sbin/chkconfig mongrel_cluster on
Now everything should be good to go. Here is a sample of the mongrel_cluster.yml file.
user: mongrel
cwd: /var/www/vhost/
log_file: /var/www/vhost/
port: "8000"
environment: production
group: mongrel
pid_file: /var/www/vhost/
servers: 3
You will need to create a link to the mongrel_cluster.yml file from the project/current/config directory into the /etc/mongrel_cluster folder you created above. It is shown on two lines because it is so long.
ln -s /var/www/apps/testapp/current/config/mongrel_cluster.yml 
Now you will need to change the permissions on the folders so that the mongrel user can write logs and pids
chown -R mongrel.mongrel /var/www/vhost/APPLICATION/shared/log
chown -R mongrel.mongrel /var/www/vhost/APPLICATION/shared/pids
Now that you have everything set up, just set up the apache vhost files
<IfModule mod_proxy_balancer.c>
     <Proxy "balancer://mongrel-cluster">
          # BalancerMember entries go here, one per mongrel
          # (e.g. ports 8000-8002 for the 3 servers configured above)
     </Proxy>
     ProxyPass / balancer://mongrel-cluster/
     ProxyPassReverse / balancer://mongrel-cluster/
#   ProxyPreserveHost on
</IfModule>
I think the best way to test this is to do both: run /etc/init.d/mongrel_cluster restart, and then reboot the machine to make sure it comes back up after a restart.

CentOS 5.2 SVN/Subversion Server

Here is a quick set of instructions for building an SVN server on Linux CentOS 5.2. I used it for a while until I created one on Ubuntu 8.10.

Check to see if SVN/Subversion is installed on your box
which svn
If it is not installed, then please install it before continuing.
cd /etc/init.d/
/sbin/chkconfig --add svnserve
Now we want to create the configuration for the svnserve daemon so it starts on reboot.
nano svnserve
Now that the editor is up paste the information below into the file.
#!/bin/bash
#
#   /etc/rc.d/init.d/subversion
# Starts the Subversion Daemon
# chkconfig: 2345 90 10
# description: Subversion Daemon
# processname: svnserve
# pidfile: /var/lock/subsys/svnserve

source /etc/rc.d/init.d/functions

[ -x /usr/bin/svnserve ] || exit 1

### Default variables
# (assumed defaults -- adjust prog and pidfile to your setup)
prog="svnserve"
pidfile="/var/run/$prog.pid"
SYSCONFIG="/etc/sysconfig/$prog"
RETVAL=0

### Read configuration
[ -r "$SYSCONFIG" ] && source "$SYSCONFIG"

arthur=" --listen-host -r /var/repositories"
desc="Subversion Daemon"

start() {
   echo -n $"Starting $desc ($prog): "
   daemon $prog -d $arthur --pid-file $pidfile
   RETVAL=$?
   echo
   if [ $RETVAL -eq 0 ]; then
     touch /var/lock/subsys/$prog
   fi
   return $RETVAL
}

obtainpid() {
   pidstr=`pgrep $prog`
   pidcount=`awk -v name="$pidstr" 'BEGIN{split(name,a," "); print length(a)}'`
   if [ ! -r "$pidfile" ] && [ $pidcount -ge 2 ]; then
        pid=`awk -v name="$pidstr" 'BEGIN{split(name,a," "); print a[1]}'`
        echo $prog is already running and it was not started by the init script.
   fi
}

stop() {
   echo -n $"Shutting down $desc ($prog): "
   if [ -r "$pidfile" ]; then
        pid=`cat $pidfile`
        kill -s 3 $pid
        RETVAL=$?
   fi
   [ $RETVAL -eq 0 ] && success || failure
   echo
   if [ $RETVAL -eq 0 ]; then
     rm -f /var/lock/subsys/$prog
     rm -f $pidfile
   fi
   return $RETVAL
}

restart() {
   stop
   start
}

forcestop() {
   echo -n $"Shutting down $desc ($prog): "
   obtainpid
   kill -s 3 $pid
   RETVAL=$?
   [ $RETVAL -eq 0 ] && success || failure
   echo
   if [ $RETVAL -eq 0 ]; then
     rm -f /var/lock/subsys/$prog
     rm -f $pidfile
   fi
   return $RETVAL
}

status() {
   if [ -r "$pidfile" ]; then
        pid=`cat $pidfile`
   fi
   if [ $pid ]; then
        echo "$prog (pid $pid) is running..."
   else
        echo "$prog is stopped"
   fi
}

case "$1" in
   start)
        start
        ;;
   stop)
        stop
        ;;
   forcestop)
        forcestop
        ;;
   restart)
        restart
        ;;
   condrestart)
        [ -e /var/lock/subsys/$prog ] && restart
        ;;
   status)
        status
        ;;
   *)
        echo $"Usage: $0 {start|stop|forcestop|restart|condrestart|status}"
        RETVAL=1
        ;;
esac

exit $RETVAL
Now that the code has been copied in, save the file. Next we need to make it executable by the OS.
chmod +x svnserve
Now you should reboot to make sure everything is good and the service starts when the system comes up. You should also be able to start it from the command line with
/etc/init.d/svnserve start
Alternatively you can stop or restart it:
/etc/init.d/svnserve stop
/etc/init.d/svnserve restart

MySQL 5.0 Replication

Replication is having the data from a master unit copy to the backup unit(s) automatically. In this demonstration I actually used a Linux MySQL server replicating to a Mac OS X database server every 5 minutes.

Let's say you have 2 MySQL servers (production and read-only backup). How would you get the information automatically from the production unit to the backup unit?

I believe that the my.cnf is typically in the same place across all *NIX OSes.
nano /etc/my.cnf
Make sure to check the location of the my.cnf file and that you have mysql installed. This will not work for MySQL 4.x, and it has been a few months since I built this, but everything should work fine. I also recommend that you back up your databases before you begin.

Edit the my.cnf file using either vi or nano; I always use nano even though it is not better than vi.
#Example MySQL config file for very large systems.
# This is for a large system with memory of 1G-2G where the system runs mainly
# MySQL.
# You can copy this file to
# /etc/my.cnf to set global options,
# mysql-data-dir/my.cnf to set server-specific options (in this
# installation this directory is /var/mysql) or
# ~/.my.cnf to set user-specific options.
# In this file, you can use all long options that a program supports.
# If you want to know which options a program supports, run the program
# with the "--help" option.

# The following options will be passed to all MySQL clients
#password       = your_password
port            = 3306
socket          = /var/mysql/mysql.sock

# Here follows entries for some specific programs

# The MySQL server
port            = 3306
socket          = /var/mysql/mysql.sock
key_buffer = 384M
max_allowed_packet = 100M
table_cache = 512
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size = 32M
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 8

# Don't listen on a TCP/IP port at all. This can be a security enhancement,
# if all processes that need to connect to mysqld run on the same host.
# All interaction with mysqld must be made via Unix sockets or named pipes.
# Note that using this option without enabling named pipes on Windows
# (via the "enable-named-pipe" option) will render mysqld useless!

# Replication Master Server (default)
# binary logging is required for replication

# required unique id between 1 and 2^32 - 1
# defaults to 1 if master-host is not set
# but will not function as a master if omitted
server-id       = 2

# Replication Slave (comment out master section to use this)
# To configure this host as a replication slave, you can choose between
# two methods :
# 1) Use the CHANGE MASTER TO command (fully described in our manual) -
#    the syntax is:
#    CHANGE MASTER TO MASTER_HOST=<host>, MASTER_PORT=<port>,
#    MASTER_USER=<user>, MASTER_PASSWORD=<password> ;
#    where you replace <host>, <user>, <password> by quoted strings and
#    <port> by the master's port number (3306 by default).
#    Example:

#    MASTER_USER='slave', MASTER_PASSWORD='password';
# OR
# 2) Set the variables below. However, in case you choose this method, then
#    start replication for the first time (even unsuccessfully, for example
#    if you mistyped the password in master-password and the slave fails to
#    connect), the slave will create a file, and any later
#    change in this file to the variables' values below will be ignored and
#    overridden by the content of the file, unless you shutdown
#    the slave server, delete and restart the slave server.
#    For that reason, you may want to leave the lines below untouched
#    (commented) and instead use CHANGE MASTER TO (see above)
# required unique id between 2 and 2^32 - 1
# (and different from the master)
# defaults to 2 if master-host is set
# but will not function as a slave if omitted
server-id       = 2
# The replication master for this slave - required
master-host     =
# The username the slave will use for authentication when connecting
# to the master - required
master-user     =   slave
# The password the slave will authenticate with when connecting to
# the master - required
master-password =   password
# The port the master is listening on.
# optional - defaults to 3306
#master-port     =  <port>
# binary logging - not required for slaves, but recommended

# Point the following paths to different dedicated disks
#tmpdir         = /tmp/
#log-update     = /path-to-dedicated-directory/hostname

# Uncomment the following if you are using BDB tables
#bdb_cache_size = 384M
#bdb_max_lock = 100000

# Uncomment the following if you are using InnoDB tables
innodb_data_home_dir = /var/mysql/
innodb_log_group_home_dir = /var/mysql/
innodb_log_arch_dir = /var/mysql/
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 384M
innodb_additional_mem_pool_size = 20M
# Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 100M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50

#Tells the slave thread to restrict replication to the specified database. 
#To specify more than one database,use the directive multiple times, 
#once for each database.
#Hostname or IP of the slave to be reported to to the master during 
#slave registration. Will appear in the output of SHOW SLAVE HOSTS.
#Leave unset if you do not want the slave to report the name
#This username will be displayed in the output of "SHOW SLAVE HOSTS".
#This password will be displayed in the output of "SHOW SLAVE HOSTS".
#Don't cache host names.
#Don't resolve hostnames. All hostnames are IP's or 'localhost'.

#If set, allows showing user and password via SHOW SLAVE HOSTS
#on master.
#Number of seconds to wait for more data from a master/slave 
#connection before aborting the read
#Tells the slave to log the updates from the slave thread to the binary 
#log. You will need to turn it on if you plan to daisy-chain the slaves.
#The number of seconds the slave thread will sleep before retrying to
#connect to the master in case the master goes down or the connection is lost.

max_allowed_packet = 16M

# Remove the next comment character if you are not familiar with SQL

key_buffer = 256M
sort_buffer_size = 256M
read_buffer = 2M
write_buffer = 2M

key_buffer = 256M
sort_buffer_size = 256M
read_buffer = 2M
write_buffer = 2M

Don't worry if this is a bit tricky; I will be building a new replica again in the very near future with the latest version of MySQL, and I will detail the entire process from beginning to end.
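To boil the wall of config above down to its essentials, these are roughly the directives that matter for a master/slave pair. The IP and names below are examples, and note that the master also needs binary logging turned on, as the sample file's comments say:

```ini
# --- master my.cnf (example values) ---
[mysqld]
server-id = 1
log-bin   = mysql-bin    # binary logging, required on the master

# --- slave my.cnf (example values) ---
[mysqld]
server-id       = 2      # must differ from the master
master-host     = 192.168.0.10
master-user     = slave
master-password = password
```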

Windows RDP Port Change

Recently I spent hours trying to track down connection problems with a new image. It turns out the person wanted the MS RDP connection on another port for security reasons. I spent hours fiddling with every setting, so here are some quick notes on how to make the change.

Click on the Start button, then click on "Run...", and launch the registry editor from the box. An explorer-style window will open, and it might even open to the correct folder of settings; if not, navigate to the following path
Terminal Server\WinStations\RDP-Tcp
From there you should see the port number setting. Double-click on it, change the radio button to Decimal, and change the value to the port you want it to listen on.

Wednesday, February 18, 2009

Flash on Ubuntu 8.10 64-bit

So there is always a problem when you try to do 64-bit. Here is how I got Adobe Flash running on Ubuntu 8.10 64-bit.

Run this in the terminal.
sudo apt-get install flashplugin-nonfree

It worked for me, but if not, you will have to keep looking for more directions.

MediBuntu Repository for Ubuntu

Most people will add some additional repositories to their Ubuntu install. Here are some notes on adding this one; a complete list of instructions can be found on the Medibuntu site.

Run this command for Ubuntu 8.10 (shown here on 2 lines, but it is one line)
sudo wget --output-

Then run
sudo apt-get update && sudo apt-get install medibuntu-keyring && sudo apt-get update

You may be asked to accept this package even though it cannot be authenticated. This is normal; typing "Yes" means you trust Medibuntu.

Sun xVM VirtualBox on Ubuntu 64-bit

While there are lots of virtual machines like VMware Workstation for Linux, that one costs money after 30 days. Another cost-effective virtual machine comes from Sun Microsystems: it is free and has a reasonably easy install. This is how I installed it on Ubuntu 8.10 64-bit.

Get the software from the VirtualBox site (hopefully the download link remains stable in the future)

Reading the VirtualBox documentation, it looks like you have to apt-get a few programs before installing, plus make some permission changes.
sudo apt-get install dkms

I also had some problems installing the above and had to run the following, per directions from the terminal
sudo apt-get install -f

Once you find the correct version you need to download it. You will notice that it is a .deb package. To install the .deb package you need to run the following command:
sudo dpkg --install virtualbox-2.1_2.1.4-42893_Ubuntu_intrepid_amd64.deb

You might find that you have some problems, but I hope it installs anyway. Now add yourself to the group vboxusers in System/Administration/Users and Groups.
Now run this command; it will check and set up the system, and everything should be good after this
/etc/init.d/vboxdrv setup

Now check the Applications menu to see if it is completely installed: Applications/System Tools.
Good luck and hope it works for you and you have fun.

VMware Workstation 6.5 on Ubuntu 64-bit

I thought some people out there might want directions on how to install VMware Workstation on an Ubuntu 8.10 64-bit system, since it doesn't install through apt-get.

VMware Workstation 6.5 allows you to virtualize multiple operating systems on your Linux desktop. The VM images should be interchangeable between Windows and Mac OS X VMware Fusion 2.x.

1) Once you get the software, put it on the desktop. Then open a terminal and run the command below.
sudo sh VMware-Workstation-6.5.1-126130.x86_64.bundle

The installer should open up and begin to install. Once it has completed you can find the application under Applications / System Tools.

Multiple Select

I came across this nice article, pointed out by a friend, Brandon Burke. It has to do with a multiple-select drop-down box in a form. You can use this select box to, you guessed it, select multiple items. You'll need jQuery and the asmSelect js code.

Here is a link to one of many examples.
First Example

Also here is the main article.
Main Article
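Basic usage, as a sketch: the file paths and the select's id below are placeholders, and the call assumes the asmSelect plugin's default API.

```html
<!-- jQuery and the asmSelect plugin (paths are placeholders) -->
<script src="jquery.js"></script>
<script src="jquery.asmselect.js"></script>
<link rel="stylesheet" href="jquery.asmselect.css">

<!-- A plain multiple select; asmSelect turns it into the nicer widget -->
<select id="colors" name="colors[]" multiple="multiple" title="Select a color">
  <option>Red</option>
  <option>Green</option>
  <option>Blue</option>
</select>

<script>
  $(document).ready(function() {
    // Convert the multiple select into the asmSelect widget
    $("#colors").asmSelect();
  });
</script>
```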

Monday, February 16, 2009

Mongrel on Mac OS X

So after spending about half a day trying to get a website running on Mac OS X Leopard I thought I would post a few notes that might help you in the future. Special thanks to Layton W. for his additional notes.
NOTE - these commands are very long so make sure to get it all on one line.
Basics for Starting a Mongrel on Mac OS X Leopard
sudo /usr/bin/mongrel_rails_persist start -p 8000 -e production --user admin -c /Library/WebServer/web_site

Basics for Stopping a Mongrel on Mac OS X Leopard
sudo /usr/bin/mongrel_rails_persist stop -p 8000

Log files for Mongrels
Well, I hope this helps you get your applications up and running on OS X Leopard.

Thursday, February 12, 2009

Pear on Mac OS X Leopard

The newest version of Snow Leopard has PEAR but the last version of Leopard (10.5.x) did not come with PEAR. As a web developer I use PHP with PEAR. This is how I got it working with just a few commands.

I found that it is easiest to navigate to the location where the files need to be, like this:
cd /usr

Then, if it isn't already created, make a local folder:
mkdir local
cd /usr/local

Now download the PEAR from the
curl > go-pear.php
sudo php -q go-pear.php

While it was installing I left everything at the defaults, not changing or typing anything. On the last part, where it wanted to change the php.ini file, I said no, do not alter. Below are the directions for changing it.
PHP changes
By default Mac OS X has a php.ini.default file, so it is best to make a copy:
cd /etc
cp php.ini.default php.ini

Now edit the file.
nano php.ini

NOTE: I used the CTRL+W in nano to find the "include_path". I then added to the path.
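For reference, go-pear run from /usr/local typically installs the PEAR classes under /usr/local/PEAR, so the edited line might look like the following. Treat the path as an assumption; use whatever directory the installer reported.

```
; php.ini - add the PEAR directory to the include path (path is an assumption)
include_path = ".:/usr/local/PEAR"
```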

Now you need to restart Apache. Also you can check to make sure that PEAR is working by typing the following
which pear

If you get back a path then you have successfully installed PEAR.

VMware ESXi Setup

Setting up ESXi - After setting up the server, enable SSH using the following directions:

1. At the console of the ESXi host, press ALT-F1 to access the console window.

2. Enter unsupported in the console and then press Enter. You will not see the text you type in.

3. If you typed in unsupported correctly, you will see the Tech Support Mode warning and a password prompt. Enter the password for the root login.

4. You should then see the prompt of ~ #. Edit the file inetd.conf (enter the command vi /etc/inetd.conf).

5. Find the line that begins with #ssh and remove the #. Then save the file. If you're new to using vi, move the cursor down to the #ssh line and then press the Insert key. Move the cursor over one space and then hit backspace to delete the #. Then press ESC and type :wq to save the file and exit vi. If you make a mistake, you can press the ESC key and then type :q! to quit vi without saving the file.

6. You can either restart your host or run ps | grep inetd to determine the process ID for the inetd process. The output of the command will be something like 1299 1299 busybox inetd, and the process ID is 1299. Then run kill -HUP <process_id> (kill -HUP 1299 in this example) and you'll then be able to access the host via SSH.

7. Log in remotely: ssh root@IP_ADDRESS

Windows SNMP Configuration

I recently wanted to add my windows servers to my cacti monitoring system. Here are some directions on how to get that going.

1. Go to Start | Control Panel, and double-click the Administrative Tools applet.

2. Open the Services console, and select SNMP Service.

3. On the Agent tab, specify the types of applications that you want the server to report through SNMP by selecting the check box of each required application type.

4. On the Traps tab, specify the SNMP trap destinations to which the server will send trap notifications. (Trap destinations are the management systems that need to receive SNMP management notifications from the server.) The community name acts as a combination password and identifier, so you must specify at least one SNMP community name on the Traps tab.

5. On the Security tab, specify the hosts from which the server will accept SNMP packets, and configure the allowed actions for specific communities.

Windows 2003 DNS Secondary

I recently built a secondary DNS server to replace a failing unit. It was pretty straightforward but took me a little bit to figure out. I hope this helps you get it done faster.

To setup a DNS secondary you need to make sure that the NAMESERVER and ZONE TRANSFER are setup to allow the secondary server to transfer the information. Here are some good directions.

1. Primary DNS - Open the Forward Lookup Zones on the site you wish to sync with the secondary server. Right-click on the DNS entry and go to Properties. Under the properties, click on Name Servers and add your secondary DNS name server's IP address to the list. Then go to Zone Transfers, check the checkbox, and choose the radio button for Only to servers listed on the Name Servers tab. Then save the information.

2. Secondary DNS - Open the DNS MMC and then right-click Forward Lookup Zones. Click on New Zone.... The wizard will show up. Click the Next button, then choose Secondary zone, then Next. Type your zone exactly the same as the primary DNS name. Then enter the IP address for the primary DNS server and click Add. Then click Next. Then confirm the information and click Finish.

3. You might need to check the new zone on the secondary DNS but if all goes well it should show all of the information exactly the same as the other Primary DNS server.

4. Now repeat until all DNS zones are completed.

Note: I got these directions from the following website:

Ubuntu Disk Utilities

These are directions I had to find through the web to format new disks that I install to an existing system.

This will show you all of the attached drives on the system
fdisk -l

fdisk utility

First, you will need to run the fdisk command in order to partition the disk. For this example, I only want to create one ext3 partition. Here is an example session:
fdisk /dev/hdb

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 4865.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)


Partition number (1-4): 1
First cylinder (1-4865, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-4865, default 4865): 4865

Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): 83

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Create ext3 File System

The next step is to create an ext3 file system on the new partition. Provided with the distribution is a script named /sbin/mkfs.ext3. Here is an example session of using the mkfs.ext3 script:
mkfs.ext3 -b 4096 /dev/hdb1

mke2fs 1.27 (8-Mar-2002)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
4889248 inodes, 9769520 blocks
488476 blocks (5.00%) reserved for the super user
First data block=0
299 block groups
32768 blocks per group, 32768 fragments per group
16352 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

Mounting the File System
Now that the new drive(s) are partitioned and formatted, the last step is to mount the new drive(s). Typically on Ubuntu they will be in the /mnt directory.
I create a folder for each mount in the /mnt directory: mkdir /mnt/nfs or mkdir /mnt/right_raid

Below is the output from the /etc/fstab file; note the entries I made:
# /etc/fstab: static file system information.

proc            /proc           proc    defaults        0       0
# /dev/sda1
UUID=9856bbea-a089-475f-ab29-1b976a666869 /               ext3    relatime,errors=remount-ro 0       1
# /dev/sda5
UUID=2ad62d1d-e91f-4d48-8830-c5f32c8a5c56 none            swap    sw              0       0
/dev/scd1       /media/cdrom0   udf,iso9660 user,noauto,exec,utf8 0       0
/dev/scd0       /media/cdrom1   udf,iso9660 user,noauto,exec,utf8 0       0
#Xserve RAID
#/dev/sdc1      /mnt/left_raid  ext3    defaults        0       0
#/dev/sdb1      /mnt/right_raid ext3    defaults        0       0
/dev/sdb1       /mnt/nfs        ext3    defaults        0       0
/dev/sdg1       /mnt/right_raid ext3    defaults        0       0

After making the entry in the /etc/fstab file, it is now just a matter of mounting the disk:
mount /db

df -k

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/hda3             37191660  11016692  24285724  32% /
/dev/hda1               101089     12130     83740  13% /boot
none                    515524         0    515524   0% /dev/shm
/dev/hdb1             38464340     32828  36477608   1% /db
To mount a drive manually, first mkdir the mount point you want to attach the drive to (like creating a drive letter name), then run:
mount /dev/sdc1 /mnt/right_raid -t ext3

Thanks to a friend of mine for the information about cfdisk, a semi-graphical utility to partition drives.
cfdisk /dev/sdf

Ubuntu SVN/Subversion Server

I use Subversion/SVN all the time to keep track of the project as we build them. Here are some notes on how to make your own SVN server so that you can do the same.

First you might want to check and see if the SVN software has been installed using the command
which svn

If it comes back with a path then it has been correctly installed; if not, run this command to install the latest version:
sudo apt-get install subversion

Now that the software has been installed, we need to create the service in the init.d folder:
cd /etc/init.d/

Add the subversion SYSTEM user to the machine
useradd --system subversion

Add the repositories folder to the system. Some people say make a folder in the home directory, I like to put it into the /var directory
mkdir /var/repositories

Now we need to create the service file
nano svnserve

Paste the below text into the editor and then save it.
#!/bin/sh -e
# svnserve - brings up the svn server so anonymous users
# can access svn

# Get LSB functions
. /lib/lsb/init-functions
. /etc/default/rcS

# Daemon settings (match the user and repository path created above)
SVNSERVE=/usr/bin/svnserve
SVN_USER=subversion
SVN_GROUP=subversion
SVN_REPO_PATH=/var/repositories

# Check that the package is still installed
[ -x $SVNSERVE ] || exit 0;

case "$1" in
    start)
        log_begin_msg "Starting svnserve..."
        umask 002
        if start-stop-daemon --start \
            --chuid $SVN_USER:$SVN_GROUP \
            --exec $SVNSERVE \
            -- -d -r $SVN_REPO_PATH; then
            log_end_msg 0
        else
            log_end_msg $?
        fi
        ;;
    stop)
        log_begin_msg "Stopping svnserve..."
        if start-stop-daemon --stop --exec $SVNSERVE; then
            log_end_msg 0
        else
            log_end_msg $?
        fi
        ;;
    restart|force-reload)
        "$0" stop && "$0" start
        ;;
    *)
        echo "Usage: /etc/init.d/svnserve {start|stop|restart|force-reload}"
        exit 1
        ;;
esac

exit 0

Now set the permissions on the file you just created.
sudo chmod +x svnserve

Now you can try and start the service
sudo /etc/init.d/svnserve start

Make it start when rebooting
update-rc.d svnserve defaults

Ubuntu Ruby on Rails

I am learning Ruby on Rails quite rapidly, so when I wanted to get rolling on Ubuntu Linux I put together the following nice-to-haves and some stuff I always install.
apt-get install libopenssl-ruby
apt-get install ruby
apt-get install ruby-dev
apt-get install irb
apt-get install rdoc

Run this if you want RubyGems for Ruby 1.8 installed.
apt-get install rubygems1.8

Ubuntu User/Group locations?

One day a while back I wanted to figure out where all of the accounts were listed on the machine. I looked everywhere and finally came across the information, and thought I had better write it down.

User account information is located in /etc/passwd (with hashed passwords in /etc/shadow).

Group names are kept in /etc/group.
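Both account and group databases are plain colon-separated text files, so you can peek at them with standard tools (nothing here changes the system):

```shell
# List the first few account names from /etc/passwd (first colon-separated field)
cut -d: -f1 /etc/passwd | head -5

# List group names the same way from /etc/group
cut -d: -f1 /etc/group | head -5
```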

Ubuntu Apache/PHP additions

I don't know about most people but usually I like to have these additional modules added to my Apache/PHP installation.

This is the main set of compilers we will need to build the packages.
apt-get install build-essential

We use the below for PEAR mail when we do mass emails and or HTML emails
apt-get install php-pear

We use the below more for Ruby on Rails applications, but you never know when you want to run a SQLite DB instead of a MySQL DB. Also included are the headers needed to compile for RoR.
apt-get install sqlite3
apt-get install libsqlite3-dev

We use the below again mostly for Ruby on Rails but no system is complete without XML capabilities.
apt-get install libxml2-dev

Image magick is great for generating a ton of thumbnails or converting pdfs to images for the web. Here are the libraries to get it working. You will also need this if you are going to install Ruby rmagick.
apt-get install libmagick9-dev

Updated 2010 for Ubuntu 9.x
apt-get install imagemagick
apt-get install libmagick*-dev
apt-get install libmagickwand-dev

Ubuntu VSFTP

So this one is very easy to do and works very well for most installations. It is using the VSFTP Service.
apt-get install vsftpd

Then change the following in the file /etc/vsftpd.conf
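The exact lines changed did not survive in this note; as a hedged sketch, the settings usually toggled in /etc/vsftpd.conf for local-user FTP are:

```
# Assumptions, not necessarily the original edits:
local_enable=YES
write_enable=YES
```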

Now you need to restart the service
/etc/init.d/vsftpd restart
This works very well with user accounts on the machine.
UPDATE: I had some problems with people posting files where the permissions were always read-only for websites. Just add this to the conf file and things should work better:
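The added line itself was lost from this note; a common fix for uploads landing read-only is loosening the local umask (an assumption, not necessarily the original line):

```
# Files upload as 644 instead of 600
local_umask=022
```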

Ubuntu MySQL 5.x

So here is a great guide to installing MySQL on Ubuntu 8.10 Server and creating a user account to access the database.

Installing the software if not already installed.
apt-get install mysql-server
apt-get install libmysqlclient15-dev

Make sure the "bind-address" setting in /etc/mysql/my.cnf is set to the IP address of the computer.
Once installed you will need to create the accounts (these INSERTs go in the mysql database, so run USE mysql; first) in order to log in remotely.
INSERT INTO user SET Host='localhost', User='admin', Password=password('SOME_PASSWORD');
INSERT INTO user SET Host='%', User='admin', Password=password('SOME_PASSWORD');

Once inserted into the database, reload the privileges by running FLUSH PRIVILEGES; from the mysql prompt.
To be honest, I recommend you then restart the mysql service.
/etc/init.d/mysql restart

Now you should be good to go with creating the new databases.

Ubuntu Apache Config

While this is most likely not all finished, I recently had to put together a PHP/Apache install and here are some notes on it.

Extra software you are probably going to want to have:
apt-get install php-pear

We have a need for PDF creation for another project, so I also install a few extras:
apt-get install php5-curl

Let's start with the locations of everything. Run as root for any of the commands below.

Location of the virtual hosts files like .html, .php

vhost configuration files for Apache live in /etc/apache2/sites-available

Once you copy your vhost files to that location then you need to make a link to the other folder
ln -s /etc/apache2/sites-available/SITE_CONF /etc/apache2/sites-enabled/SITE_CONF

Getting the mods up and running. Like above we use the linking of the mods from one folder to another
ln -s /etc/apache2/mods-available/MODS /etc/apache2/mods-enabled/MODS

Here are the mods that I have up and running

Starting and stopping the apache2 server
/etc/init.d/apache2 start
/etc/init.d/apache2 stop
/etc/init.d/apache2 restart

Ubuntu NFS Server

Well I always seem to need these notes for setting up the NFS server for Ubuntu so here they are. Also I found NFS to be a great storage share for anything related to backing up or moving around files. I even use it for VMware and databases.

NFS Installation
Run this as root
apt-get install nfs-kernel-server nfs-common portmap

When configuring portmap, do not bind to loopback. If you did, you can change it by editing /etc/default/portmap:
nano /etc/default/portmap

Restart Portmap using the following command
/etc/init.d/portmap restart

NFS Server Configuration
NFS exports from a server are controlled by the file /etc/exports. Each line begins with the absolute path of a directory to be exported, followed by a space-separated list of allowed clients.
You need to edit the exports file using the following command
nano /etc/exports

Here are the mounts for our infrastructure

Now you need to restart NFS server using the following command
/etc/init.d/nfs-kernel-server restart

If you make changes to /etc/exports on a running NFS server, you can make these changes effective by issuing the command
exportfs -a

Client Connection
This is the command you would want to run from a computer that would connect back to the NAS
mount /mnt/nas
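That short mount command relies on an /etc/fstab entry on the client; a sketch of such an entry, where the server name and export path are placeholders:

```
# client /etc/fstab (hypothetical server and export)
nas-server:/mnt/nfs   /mnt/nas   nfs   rw,hard,intr   0   0
```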

Additional Information
If you're connecting to a Linux NFS server from Mac OS X, you need to specify 'insecure' in your exports and map the user IDs, since Macs use uid 501 for the first regular user. For my /etc/exports I use:
/home (async,insecure,all_squash,anonuid=1000,anongid=1000)

Ubuntu Dropping Network Connection

So when I was running 64-bit Ubuntu 8.10 I noticed that the network dropped from the machine for no reason whatsoever. The machine was fine; just the network gave out. I found this little explanation and the solution, which I am going to try out and see if it works.

Originally found here:

Here is the excerpt: CRITICAL: cannot initialize libpolkit

Here is the posting:

The error is triggered by the update-motd cron job, which runs every 10 minutes.

This is a bug in Intrepid. console-kit-daemon requires PolicyKit as a dependency, but Intrepid (Server AMD64) does not install it when it installs console-kit-daemon.

The simple fix is to install policykit.
sudo apt-get install policykit

On the next run of the update-motd job the error is gone.

Apache SSL redirect

So I have this secure shopping cart RoR project. I want people to simply navigate to the web address, but when they get there I want them to automatically be moved over to the SSL version. I was looking for a simple solution, one that I could easily implement without changing much of the site's code. Here is how I did it.
In Apache Virtual Host file on the *:80 config I have this set.
<VirtualHost *:80>
     Redirect /
</VirtualHost>

This will direct any traffic coming in on port 80 over to the 443/https portion of the website. Make sure you have the HTTPS portion configured properly.
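Putting it together, a minimal port-80 vhost that bounces everything to HTTPS might look like this (example.com is a placeholder domain):

```
<VirtualHost *:80>
     ServerName example.com
     # Send every request to the SSL side of the site
     Redirect permanent / https://example.com/
</VirtualHost>
```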

Wednesday, February 11, 2009

Ubuntu SMB/CIFS Server

I was in the process of building a NAS device and needed to enable SMB sharing so that Windows users could drop files onto it. I needed to figure out a way to get it working on my Ubuntu 8.10 server; here are my notes from the setup.
Install the service
sudo apt-get install samba

SMB Configuration
We now have to modify the SMB settings from the command line
nano /etc/samba/smb.conf

The file out of the box has almost everything turned off. I left the default information alone and added the following part to (2) sections:
Under the Global Settings area [global] add the following:
#Modified by Arthur Gressick
  security = user
  encrypt passwords = true
  map to guest = bad user
  guest account = nobody
  create mask = 0644
  directory mask = 0755
## done

Now under Share Definitions paste the following (use your own share name in the brackets):
#Modified by Arthur Gressick
[nas]
  comment = NAS Share
  path = /mnt/right_raid/smb
  read only = no
  browseable = yes
## done

Setting up SMB User
It is best practice to use an existing user already on the Linux server. I had already set up a user for rsync called rbackup (remote backup), so for the example below I am going to map the SMB account to the existing rbackup account.
Run this from the command line
smbpasswd -a rbackup

Connecting from Windows
From Windows 2000/2003 go to Start, Run, then type the UNC path of the share (\\SERVER_NAME\SHARE_NAME)

Ubuntu Rsyncd Server

The main purpose of this project was to build a server which could collect information from all of the outlying servers and bring it to a central repository system. I like to use it for webservers, so that I can have, let's say, 3 different flavors of Linux running legacy systems. Backing those up is very cumbersome, so having a crontab job automatically sync the files back to a central unit makes it much easier to restore them should anything happen to a server. Below is the information on how I created this on Ubuntu 8.10. I am sure it can be changed a bit to work on other systems.
Let's install a couple of packages on the server
apt-get install rsync
apt-get install xinetd

1. Edit /etc/default/rsync to start rsync as daemon using xinetd.
nano /etc/default/rsync

Now look for the following code below and change.
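The changed line was not captured in this note; on Debian/Ubuntu the stock /etc/default/rsync toggle for running under a super-server is:

```
# Change from RSYNC_ENABLE=false so xinetd launches the daemon
RSYNC_ENABLE=inetd
```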

2. Create /etc/xinetd.d/rsync to launch rsync via xinetd.
nano /etc/xinetd.d/rsync

Now paste the information below:
service rsync
{
      disable         = no
      socket_type     = stream
      wait            = no
      user            = root
      server          = /usr/bin/rsync
      server_args     = --daemon
      log_on_failure  += USERID
}
3. Create /etc/rsyncd.conf configuration for rsync in daemon mode.
nano /etc/rsyncd.conf

Now paste the following information below:
max connections = 5
log file = /var/log/rsync.log
timeout = 300

[rbackup]
      comment = Public Share
      path = /home/rbackup
      read only = no
      list = yes
      uid = rbackup
      gid = rbackup
      auth users = rbackup
      secrets file = /etc/rsyncd.secrets
      hosts allow =

4. Create /etc/rsyncd.secrets for user's password.
nano /etc/rsyncd.secrets

Now paste the following information below
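The pasted content was lost from this note; the rsyncd.secrets format is one user:password pair per line, matching the auth users above (the credentials here are hypothetical):

```
rbackup:SOME_PASSWORD
```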

Now change the permissions on the file
chmod 600 /etc/rsyncd.secrets

5. Start/Restart xinetd
/etc/init.d/xinetd restart

Now you can test this from another machine
rsync user@

NOTE: if you want to set up a password file, you will have to create it on the source server and then change the permissions on the file in order for it to work properly.
nano password

Just enter the password in the file no extra spaces or other characters then chmod the permissions
chmod 600 password

Example of a rsync script
rsync -avz --password-file=/password /DB_Backups/ user@

There are tons of scripts you can run and also changing the flags to preserve the permissions of the files. Have fun!!


CentOS SNMP Configuration

The standard installs I do for CentOS don't come with SNMP installed. I used the GUI to install SNMP, then ran the following commands:
Rename the default configuration file
mv /etc/snmp/snmpd.conf /etc/snmp/snmpd.bak

Create new file with your own SNMP configuration
vi /etc/snmp/snmpd.conf

Paste the following information into the terminal window
# snmpd.conf
# - created by the snmpconf configuration program
# SECTION: System Information Setup
# This section defines some of the information reported in
# the “system” mib group in the mibII tree.
# syslocation: The [typically physical] location of the system.
# Note that setting this value here means that when trying to
# perform an snmp SET operation to the sysLocation.0 variable will make
# the agent return the “notWritable” error code. IE, including
# this token in the snmpd.conf file will disable write access to
# the variable.
# arguments: location_string

syslocation SAVVIS
# syscontact: The contact information for the administrator
# Note that setting this value here means that when trying to
# perform an snmp SET operation to the sysContact.0 variable will make
# the agent return the “notWritable” error code. IE, including
# this token in the snmpd.conf file will disable write access to
# the variable.
# arguments: contact_string

syscontact Arthur Gressick
# SECTION: Access Control Setup
# This section defines who is allowed to talk to your running
# snmp agent.
# rocommunity: a SNMPv1/SNMPv2c read-only access community name
# arguments: community [default|hostname|network/bits] [oid]

rocommunity public
# SECTION: Agent Operating Mode
# This section defines how the agent will operate when it
# is running.
# agentaddress: The IP address and port number that the agent will listen on.
# By default the agent listens to any and all traffic from any
# interface on the default SNMP port (161). This allows you to
# specify which address, interface, transport type and port(s) that you
# want the agent to listen on. Multiple definitions of this token
# are concatenated together (using ‘:’s).
# arguments: [transport:]port[@interface/address],…

agentaddress 10.10.x.x:161
Then reload the SNMP service from the GUI, and if you have Cacti monitoring your OS, try to connect.

Ubuntu SSH Server

If you forgot to add the SSH server during setup, you can follow these directions for installing SSH to access your computer remotely.
sudo apt-get install openssh-server

You should now be able to log into the machine remotely.

Once in, you might need to set the root password, which is disabled by default:
sudo passwd root

Ubuntu Network Adaptor

If you have an additional network adaptor, or need to change the IP address of an adaptor, follow these commands to change the information.

Modify: /etc/network/interfaces
auto eth0
iface eth0 inet static
        # dns-* options are implemented by the resolvconf package, if installed
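The address lines themselves did not survive in the snippet above; a fuller static stanza, with placeholder RFC 1918 addresses, might look like this (an assumption — adjust to your network):

```
auto eth0
iface eth0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        # dns-* options are implemented by the resolvconf package, if installed
```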

Setting up the DNS service, modify: /etc/resolv.conf
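The nameserver entries were not captured here; /etc/resolv.conf takes one nameserver line per DNS server (the addresses and domain below are placeholders):

```
nameserver 192.168.1.1
nameserver 192.168.1.2
search example.com
```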

Modify the hosts file in /etc/hosts       localhost subdomain
# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

After this has been changed then restart the network adapter.
sudo /etc/init.d/networking restart

Ubuntu Users and Groups

I found these commands very useful when adding users and group via the command line.

If you want to add a new user to the system (a standard user):
adduser username

Add existing user tony to the www-data supplementary/secondary group with the usermod command using the -a option, i.e. append the user to the supplemental group(s). Use only with the -G option:
usermod -aG www-data tony

Change existing user tony's primary group to www:
usermod -g www tony

Adding a group is done similarly with this command
addgroup groupname

Add user to group can also be done like this
adduser username groupname

Ubuntu update utilities

After you install the Ubuntu server operating system, you usually want to install the latest updates. Here are some handy commands which will help out.

Update the system
apt-get update

apt-get upgrade

Update Manager Core
apt-get install update-manager-core

Do the release upgrade (this command comes with update-manager-core)
do-release-upgrade

Ubuntu Change Time Zone

When I installed Ubuntu 8.10 Server I didn't set the right time zone and needed to change it after I finished installing the operating system. I found this handy little command for changing the time zone, which can be run without a GUI. You will need to run it as root or with sudo.
dpkg-reconfigure tzdata
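To confirm the change took effect without a GUI, just read the clock back (no assumptions beyond a standard date command):

```shell
# The timezone abbreviation/offset printed here should match the zone you picked
date
date +%Z
```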

Ubuntu SNMP Configuration

So recently I wanted to set up a Cacti server to read all of the Ubuntu servers, and found some directions on how to install and configure the SNMP service on my Ubuntu 8.10 server.
sudo apt-get install snmpd snmp

After you install the software you need to make changes to some of the config files.

/etc/default/snmpd - Take out the so snmpd listens on all interfaces, then save the file
SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/'

Change the /etc/snmp/snmpd.conf file to the following.

#com2sec paranoid  default         public
com2sec readonly  default         public
#com2sec readwrite default         private
syslocation Location

Make sure and restart the snmp server
/etc/init.d/snmpd restart

Creating an Ubuntu CSR for Go Daddy

I had to get an SSL certificate for a website of mine yesterday. I decided to get another SSL certificate from GoDaddy. Here are my notes for getting the certificate.

I went ahead and created a folder to do all of my work in, so that I could zip it up later or have the rsync server grab it and save it in the main infrastructure.

mkdir /root/certificate_godaddy

This will generate the private key used to create the CSR.
openssl genrsa -out domain_name.key 1024

This will create the CSR for GoDaddy, which you will need to copy and paste into their site. If you are going to be requesting a wildcard SSL certificate, make sure that the Common Name is *
openssl req -new -key domain_name.key -out domain_name.csr
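As a sketch, the two steps can also be run non-interactively with -subj so nothing prompts. Every subject value below is a placeholder, not GoDaddy's requirement; set CN to your real domain (or *.domain for a wildcard).

```shell
# Generate the private key (1024-bit matches the original post; prefer 2048+ today)
openssl genrsa -out domain_name.key 1024

# Create the CSR without interactive prompts; all subject fields are placeholders
openssl req -new -key domain_name.key -out domain_name.csr \
  -subj "/C=US/ST=State/L=City/O=Example Inc/CN=www.example.com"

# Sanity-check the CSR before pasting it into the CA's web form
openssl req -in domain_name.csr -noout -subject -verify
```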

Now log into the GoDaddy site and paste the contents of the above CSR into their form. When you get the email back, I suggest putting the two files they give you (your site certificate and the gd_bundle.crt CA bundle) in the same location you created above. You will most likely have the following files.

With those files you will need to setup the SSL virtual host like the below:
<VirtualHost *:443>
        DocumentRoot "/home/user/vhosts/site_down"
        DirectoryIndex index.html index.php
        ErrorLog /var/log/apache2/secure_domain_name_error.log
        <IfModule mod_ssl.c>
                SSLEngine On
                SSLCertificateFile "/home/user/csr/"
                SSLCertificateKeyFile "/home/user/csr/"
                SSLCertificateChainFile "/home/user/csr/gd_bundle.crt"
        .. The rest of your config file

From here you should have no problems with the GoDaddy certificate. Hope this helps everyone.

Tuesday, February 10, 2009

Ubuntu Mongrel Installation

These directions are for creating the mongrel cluster on an Ubuntu system. Make sure you have all of the gems installed before you begin. All instructions are for the root user, or you can sudo all of the commands below; I am going to run as root.

We will need to install a compiler on the system before you continue.
apt-get install build-essential

You can then install ubuntu software for the mongrel_cluster
apt-get install mongrel-cluster

Now that it is installed, we will need to configure each site to have a file located in the /etc/mongrel_cluster folder. Here is an example of a configuration:
user: www-data
cwd: /var/www/vhost/
log_file: /var/www/vhost/
port: "8000" 
environment: production
group: mongrel
pid_file: /var/www/vhost/
servers: 3

This will start up 3 mongrels, on ports 8000 through 8002, for the proxy setup. This file should be located in each of the RoR projects in the config folder; you will then link each of the files into place using a command like this:
ln -s /var/www/vhost/testapp/current/config/mongrel_cluster.yml /etc/mongrel-cluster/sites-enabled/helpdeskapp.yml

You will also need to make sure that the mongrel user has the ability to write to the tmp/, logs/, system/ folders in each project
chown -Rh www-data:www-data /var/www/vhost/APPLICATION/shared/log
chown -Rh www-data:www-data /var/www/vhost/APPLICATION/shared/pids
chown -Rh www-data:www-data /var/www/vhost/APPLICATION/shared/system

Now you will need to change the mod_proxy part in each of the vhost files.
<IfModule mod_proxy_balancer.c>
     <Proxy "balancer://mongrel-cluster">
          BalancerMember
          BalancerMember
          BalancerMember
     </Proxy>
     ProxyPass / balancer://mongrel-cluster/
     ProxyPassReverse / balancer://mongrel-cluster/
#    ProxyPreserveHost on
</IfModule>

Now you can proceed with the web setup and project setup.

To restart the mongrel just run the following:
/etc/init.d/mongrel_cluster restart
