Friday, March 27, 2009

ZFS sharing over NFS

These are some instructions for setting up an NFS share in OpenSolaris with ZFS commands.

My plan is to share a secondary drive installed in my PC to other computers using the NFS protocol. I am using a 260 GB drive connected to a SATA port inside the computer. First we need to make sure that the drive is formatted properly and the ZFS pool is set up correctly.

You might want to take a look at the other ZFS references on our site before continuing.

My information: 2.5" Hitachi drive, capacity 260GB RAW, Drive ID = "c6d0"

Set up my pool called "laptop" (running as root)
zpool create laptop c6d0
Then we need a folder on the drive just to keep things tidy. I am going to call it "freebie"
zfs create laptop/freebie
Now for some NFS magic
zfs set sharenfs=on laptop/freebie
Verify that the NFS share point is set properly
zfs get sharenfs laptop/freebie
Set the permissions on the folder so people can read and write to it. These are normal *nix commands
chmod 1777 /laptop/freebie
Also, I want to go into the folder locally and modify the information through the GUI
zfs allow -u arthur create,mount laptop/freebie
Now you can test this by connecting. I am going to use a Mac for connecting, here is the command:
nfs://192.168.1.21:/laptop/freebie
I dropped a few files in and they transferred over to the folder. They were 616KB, 1.7MB, and 120KB; the sizes matter for the next example, which is enabling compression. If you don't want any compression then you can stop this demo here.

Enabling compression is quite easy, but note that existing files are not recompressed; ZFS only compresses data written after the setting is turned on. Enable Compression
zfs set compression=on laptop/freebie
Disable Compression
zfs set compression=off laptop/freebie
So now the files are 608KB, 1.4MB, and 38KB respectively; I am going to test larger files next. That is it for this demo. I have some other projects to work on, including setting up SMB on this folder as well, or another folder.
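To recap, here is the whole share-and-compress setup from this post collected into one script. It is only a sketch: RUN=echo makes it a dry run that just prints the commands (zpool/zfs only exist on the ZFS box), and the pool, filesystem, and drive names are the ones used above.

```shell
#!/bin/sh
# Dry-run recap of the steps above. Set RUN="" on a real OpenSolaris box
# to actually execute; with RUN=echo it only prints the commands.
RUN=echo
POOL=laptop          # pool name from this post
FS=$POOL/freebie     # filesystem we are sharing
DISK=c6d0            # drive ID from this post

$RUN zpool create $POOL $DISK
$RUN zfs create $FS
$RUN zfs set sharenfs=on $FS
$RUN zfs get sharenfs $FS
$RUN chmod 1777 /$FS
$RUN zfs set compression=on $FS
```

From the Mac side you can also mount it from Terminal with something like `mount -t nfs 192.168.1.21:/laptop/freebie /some/mountpoint` (the mount point is up to you), instead of going through the Finder dialog.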

Wednesday, March 25, 2009

Red5 Open Source Flash Server on Ubuntu 8.10

I am working on a video capture kiosk, so I need to either purchase thousands of dollars of Adobe Flash Media Server software or use the Red5 Open Source Flash Server, which I have been looking at and which is free. Now I don't want to knock the great software Adobe has, but the amount of traffic and the ramp-up time don't allow me to go with the expensive option. So let's get going with Red5.

I got the software from: http://osflash.org/red5

Then I took my fresh install of Ubuntu 8.10 (Desktop Edition) and ran the Add/Remove Software application. I searched for Java and installed OpenJDK Java 6 Web Start and the OpenJDK Java 6 Runtime.

Once Java was installed I unzipped the software on the desktop and ran the following commands in a terminal (as root/su):
tar -xzvf release.tar.gz
cd release
export RED5_HOME=`pwd`
./red5.sh
The terminal window will look locked, but that is just the server running; if you want to stop the server, press Ctrl+C and it will quit.

Once you get the server running you can verify it is fully working by going to your local machine: http://localhost:5080/installer
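Since the red5.sh window gives no obvious "ready" message, here is a small helper I would use to check whether anything is listening on port 5080 yet before opening the installer page. It relies on bash's built-in /dev/tcp (so it needs bash, not plain sh); the host and port are just the defaults from this post.

```shell
#!/bin/bash
# Returns success if a TCP connection to host:port can be opened.
port_open() {
  local host=$1 port=$2
  (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null
}

if port_open localhost 5080; then
  echo "Red5 is up - try http://localhost:5080/installer"
else
  echo "Nothing listening on 5080 yet"
fi
```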

I will post the Red5 video recorder once I get it working with the server setup. Have fun, and also check out some of the really cool things you can build with Red5: http://osflash.org/red5/showcase

Monday, March 23, 2009

ffmpeg live stream to ffserver

I finally got this to work!! Here is how I did it.

Hardware: an HP a1140n with 6-pin firewire and a Canon ZR45 with 4-pin firewire. I have everything hooked up and running Ubuntu 8.10.

Software: ffmpeg, ffserver config file, dvconnect, DV Live capture instructions

Once you have everything in place and have tested live capture of the video, you can continue. Remember that dvconnect needs either a tape playing in the camera or the camera recording in order to work. I ran through all of the examples first and made sure I could grab video and transcode it to Flash before putting this all together.

ffserver.conf [CHANGES]:

Note that you may experience problems with the config file and the output; fear not, we will get through this. I commented out the MPG streaming and will eventually post a good mp4 config, as I am still working on it. Below is the FLV block I got working; you can change the bit rates and so on as you see fit. Again, I commented out all of the mpg stuff in the conf file.
# FLV streaming

<Stream output.flv>
Feed feed1.ffm
VideoBitRate 1024
VideoBufferSize 128
Format flv
VideoSize 352x288
VideoFrameRate 24
VideoQMin  3
VideoQMax  3
</Stream>
You can add it right after the sample streams; that is where I added it. In a terminal, start up ffserver as root:
ffserver
Now, for the following commands, I opened 2 terminal windows and logged in as root [su]

Terminal Window (1) - dvconnect (You should have followed the direction above and tested this already)
dvconnect -- >/tmp/fifo.rawdv
NOTE: it looks like the terminal window is stuck, but it is not; it is waiting on the video stream. You may want to have the video camera on or video running before you issue this command. Trial and error.

Terminal Window (2) - Sending the video to ffserver
ffmpeg -f dv -i /tmp/fifo.rawdv http://localhost:8090/feed1.ffm
NOTE: if you have problems with this command, then either the video is not running or something odd happened. Look at terminal window (1) and see if it exited; it has to be running in order to capture the video to the FIFO.
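A quick aside on why window (1) blocks: a write to a FIFO does not complete until a reader opens the other end, which is exactly what dvconnect is waiting on until ffmpeg attaches. You can see the same behavior with plain echo and cat, no camera required:

```shell
#!/bin/sh
# Toy version of the dvconnect/ffmpeg pairing using a FIFO.
FIFO=/tmp/demo.fifo
rm -f "$FIFO"
mkfifo "$FIFO"

# Writer: blocks until a reader shows up (this is dvconnect's role).
echo "pretend this is raw DV data" > "$FIFO" &

# Reader: the moment it opens the FIFO, the writer unblocks (ffmpeg's role).
cat "$FIFO"

wait
rm -f "$FIFO"
```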

Now, with your browser, you can check that everything is running from ffserver at http://localhost:8090/stat.html. From this page you should see that a stream is waiting on someone to connect.

Watching the video from the FLV in a website: I suggest you get Flowplayer from http://www.flowplayer.org. It is the best, in my opinion. They have a sample page where you can hook the FLV from this ffserver up to the player and watch the live stream. Good luck.

Ubuntu 8.10 ffserver config file

I did a standard install and then used my instructions to install ffmpeg

My install did not create the conf file, which is needed, so I created my own; it is below.
nano /etc/ffserver.conf
Paste the text below into the editor. (NOTE: this is the original sample conf from ffmpeg; I have some changes in later posts.)
# Port on which the server is listening. You must select a different
# port from your standard HTTP web server if it is running on the same
# computer.
Port 8090

# Address on which the server is bound. Only useful if you have
# several network interfaces.
BindAddress 0.0.0.0

# Number of simultaneous requests that can be handled. Since FFServer
# is very fast, it is more likely that you will want to leave this high
# and use MaxBandwidth, below.
MaxClients 1000

# This is the maximum amount of kbit/sec that you are prepared to
# consume when streaming to clients.
MaxBandwidth 1000

# Access log file (uses standard Apache log file format)
# '-' is the standard output.
CustomLog -

# Suppress that if you want to launch ffserver as a daemon.
NoDaemon


##################################################################
# Definition of the live feeds. Each live feed contains one video
# and/or audio sequence coming from an ffmpeg encoder or another
# ffserver. This sequence may be encoded simultaneously with several
# codecs at several resolutions.

<Feed feed1.ffm>

# You must use 'ffmpeg' to send a live feed to ffserver. In this
# example, you can type:
#
# ffmpeg http://localhost:8090/feed1.ffm

# ffserver can also do time shifting. It means that it can stream any
# previously recorded live stream. The request should contain:
# "http://xxxx?date=[YYYY-MM-DDT][[HH:]MM:]SS[.m...]".You must specify
# a path where the feed is stored on disk. You also specify the
# maximum size of the feed, where zero means unlimited. Default:
# File=/tmp/feed_name.ffm FileMaxSize=5M
File /tmp/feed1.ffm
FileMaxSize 200K

# You could specify
# ReadOnlyFile /saved/specialvideo.ffm
# This marks the file as readonly and it will not be deleted or updated.

# Specify launch in order to start ffmpeg automatically.
# First ffmpeg must be defined with an appropriate path if needed,
# after that options can follow, but avoid adding the http:// field
#Launch ffmpeg

# Only allow connections from localhost to the feed.
ACL allow 127.0.0.1

</Feed>


##################################################################
# Now you can define each stream which will be generated from the
# original audio and video stream. Each format has a filename (here
# 'test1.mpg'). FFServer will send this stream when answering a
# request containing this filename.

<Stream test1.mpg>

# coming from live feed 'feed1'
Feed feed1.ffm

# Format of the stream : you can choose among:
# mpeg       : MPEG-1 multiplexed video and audio
# mpegvideo  : only MPEG-1 video
# mp2        : MPEG-2 audio (use AudioCodec to select layer 2 and 3 codec)
# ogg        : Ogg format (Vorbis audio codec)
# rm         : RealNetworks-compatible stream. Multiplexed audio and video.
# ra         : RealNetworks-compatible stream. Audio only.
# mpjpeg     : Multipart JPEG (works with Netscape without any plugin)
# jpeg       : Generate a single JPEG image.
# asf        : ASF compatible streaming (Windows Media Player format).
# swf        : Macromedia Flash compatible stream
# avi        : AVI format (MPEG-4 video, MPEG audio sound)
# master     : special ffmpeg stream used to duplicate a server
Format mpeg

# Bitrate for the audio stream. Codecs usually support only a few
# different bitrates.
AudioBitRate 32

# Number of audio channels: 1 = mono, 2 = stereo
AudioChannels 1

# Sampling frequency for audio. When using low bitrates, you should
# lower this frequency to 22050 or 11025. The supported frequencies
# depend on the selected audio codec.
AudioSampleRate 44100

# Bitrate for the video stream
VideoBitRate 64

# Ratecontrol buffer size
VideoBufferSize 40

# Number of frames per second
VideoFrameRate 3

# Size of the video frame: WxH (default: 160x128)
# The following abbreviations are defined: sqcif, qcif, cif, 4cif
VideoSize 160x128

# Transmit only intra frames (useful for low bitrates, but kills frame rate).
#VideoIntraOnly

# If non-intra only, an intra frame is transmitted every VideoGopSize
# frames. Video synchronization can only begin at an intra frame.
VideoGopSize 12

# More MPEG-4 parameters
# VideoHighQuality
# Video4MotionVector

# Choose your codecs:
#AudioCodec mp2
#VideoCodec mpeg1video

# Suppress audio
#NoAudio

# Suppress video
#NoVideo

#VideoQMin 3
#VideoQMax 31

# Set this to the number of seconds backwards in time to start. Note that
# most players will buffer 5-10 seconds of video, and also you need to allow
# for a keyframe to appear in the data stream.
#Preroll 15

# ACL:

# You can allow ranges of addresses (or single addresses)
#ACL ALLOW <first address> <last address>

# You can deny ranges of addresses (or single addresses)
#ACL DENY <first address> <last address>

# You can repeat the ACL allow/deny as often as you like. It is on a per
# stream basis. The first match defines the action. If there are no matches,
# then the default is the inverse of the last ACL statement.
#
# Thus 'ACL allow localhost' only allows access from localhost.
# 'ACL deny 1.0.0.0 1.255.255.255' would deny the whole of network 1 and
# allow everybody else.

</Stream>


##################################################################
# Example streams


# Multipart JPEG

#<Stream test.mjpg>
#Feed feed1.ffm
#Format mpjpeg
#VideoFrameRate 2
#VideoIntraOnly
#NoAudio
#Strict -1
#</Stream>


# Single JPEG

#<Stream test.jpg>
#Feed feed1.ffm
#Format jpeg
#VideoFrameRate 2
#VideoIntraOnly
##VideoSize 352x240
#NoAudio
#Strict -1
#</Stream>


# Flash

#<Stream test.swf>
#Feed feed1.ffm
#Format swf
#VideoFrameRate 2
#VideoIntraOnly
#NoAudio
#</Stream>


# ASF compatible

<Stream test.asf>
Feed feed1.ffm
Format asf
VideoFrameRate 15
VideoSize 352x240
VideoBitRate 256
VideoBufferSize 40
VideoGopSize 30
AudioBitRate 64
StartSendOnKey
</Stream>


# MP3 audio

#<Stream test.mp3>
#Feed feed1.ffm
#Format mp2
#AudioCodec mp3
#AudioBitRate 64
#AudioChannels 1
#AudioSampleRate 44100
#NoVideo
#</Stream>


# Ogg Vorbis audio

#<Stream test.ogg>
#Feed feed1.ffm
#Title "Stream title"
#AudioBitRate 64
#AudioChannels 2
#AudioSampleRate 44100
#NoVideo
#</Stream>


# Real with audio only at 32 kbits

#<Stream test.ra>
#Feed feed1.ffm
#Format rm
#AudioBitRate 32
#NoVideo
#NoAudio
#</Stream>


# Real with audio and video at 64 kbits

#<Stream test.rm>
#Feed feed1.ffm
#Format rm
#AudioBitRate 32
#VideoBitRate 128
#VideoFrameRate 25
#VideoGopSize 25
#NoAudio
#</Stream>


##################################################################
# A stream coming from a file: you only need to set the input
# filename and optionally a new format. Supported conversions:
#    AVI -> ASF

#<Stream file.rm>
#File "/usr/local/httpd/htdocs/tlive.rm"
#NoAudio
#</Stream>

#<Stream file.asf>
#File "/usr/local/httpd/htdocs/test.asf"
#NoAudio
#Author "Me"
#Copyright "Super MegaCorp"
#Title "Test stream from disk"
#Comment "Test comment"
#</Stream>


##################################################################
# RTSP examples
#
# You can access this stream with the RTSP URL:
#   rtsp://localhost:5454/test1-rtsp.mpg
#
# A non-standard RTSP redirector is also created. Its URL is:
#   http://localhost:8090/test1-rtsp.rtsp

#<Stream test1-rtsp.mpg>
#Format rtp
#File "/usr/local/httpd/htdocs/test1.mpg"
#</Stream>


##################################################################
# SDP/multicast examples
#
# If you want to send your stream in multicast, you must set the
# multicast address with MulticastAddress. The port and the TTL can
# also be set.
#
# An SDP file is automatically generated by ffserver by adding the
# 'sdp' extension to the stream name (here
# http://localhost:8090/test1-sdp.sdp). You should usually give this
# file to your player to play the stream.
#
# The 'NoLoop' option can be used to avoid looping when the stream is
# terminated.

#<Stream test1-sdp.mpg>
#Format rtp
#File "/usr/local/httpd/htdocs/test1.mpg"
#MulticastAddress 224.124.0.1
#MulticastPort 5000
#MulticastTTL 16
#NoLoop
#</Stream>


##################################################################
# Special streams

# Server status

<Stream stat.html>
Format status

# Only allow local people to get the status
ACL allow localhost
ACL allow 192.168.0.0 192.168.255.255

#FaviconURL http://pond1.gladstonefamily.net:8080/favicon.ico
</Stream>


# Redirect index.html to the appropriate site

<Redirect index.html>
URL http://www.ffmpeg.org/
</Redirect>

Ubuntu 8.10 LIVE Firewire 1394 DV Capture

Another post, but this one is a bit different: I am capturing LIVE video from a DV camera over Firewire 1394.

I am using the same equipment which is a HP a1140n with 6-pin firewire in the back connected to a Canon ZR45 video camera.

I started the camera up and also began recording.

Then I did the following things in terminal window as root
mkfifo /tmp/fifo.rawdv
dvconnect -- >/tmp/fifo.rawdv
Running the last command will stop the terminal; that is fine, because it is running a process. When you stop the camera, dvconnect will stop and the prompt will return.
I started another terminal window and began capturing video to a Flash file using ffmpeg:
ffmpeg -f dv -i /tmp/fifo.rawdv -vcodec flv -b 666k -s 380x286 -ar 22050 -ab 24 -f flv /home/user_name/Desktop/output-file.flv
I then took a look at the video. You can change around the video script for the ffmpeg.

Friday, March 20, 2009

Ubuntu 8.10 Firewire 1394 DV Capture

I am using an HP A1140n with firewire (6-pin) on the back hooked to a Canon ZR-45 camcorder (4-pin). I saw on the web that there are a lot of problems, so I hope that this will go more smoothly with everything in a single place.

I started with a fresh OS, nothing special at all. I did, however, install ffmpeg, since I am going to convert video; you can either skip that section below or go ahead and install it anyway. We use it a lot and I know that it always comes in handy down the line :-)

Before installing ffmpeg I suggest you install some DV codecs and get things set up. I needed some libraries for DV ingest, so I ran the following command:
apt-get install libdv-bin
Then you need to tell ubuntu 8.10 to load the video module when booting.
nano /etc/modules
Add "video1394" to the bottom of the list; here is a complete dump of my file.
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.

fuse
lp
sbp2
video1394
It is recommended that you reboot so that the system will load everything properly. Once it reboots, you can test the capture by typing this command:
dvconnect -v myvideo.dv
If everything went well, you can press play on the video camera and it should write a DV file to the folder where you ran the command. If you want to convert the video now, just use ffmpeg (there are instructions on our site).
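After the reboot it is worth confirming the module actually loaded before blaming the camera or the cable. Here is a small check that reads /proc/modules directly, so it works even without lsmod:

```shell
#!/bin/sh
# Succeeds if the named kernel module appears in /proc/modules.
module_loaded() {
  grep -q "^$1 " /proc/modules
}

if module_loaded video1394; then
  echo "video1394 loaded - firewire capture should work"
else
  echo "video1394 missing - check /etc/modules and dmesg"
fi
```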

Tuesday, March 10, 2009

iSCSI for Mac

I have been playing around with building a NAS device, and since I have some Mac computers to support, I found a piece of software for the Mac that connects to iSCSI devices. It is a free application and, from what I hear, a really good one. I hope to have a review of this soon.

http://www.studionetworksolutions.com/products/product_detail.php?pi=11

Monday, March 9, 2009

Capistrano Script for Ruby on Rails

Here is an excellent Capistrano deploy script for Ruby on Rails applications. It changes the permissions on the files and also pushes the mongrel cluster file. I changed my information, so make sure you look at the script from beginning to end.

#Capistrano Recipe 2.0 for pushing the files. Developed by Arthur Gressick

set :domain, "subdomain.your_site.com"
set :user, "root"
set :application, "your_site_root"
#make sure and don't run as super user
#set :use_sudo, false

#SVN Repository information
#default_run_options[:pty] = false

set :scm, :subversion
set :repository_addy,  "svn://svn.your_repository.com/ruby_project/trunk"
set :deploy_via, :export
set :deploy_to, "/home/user/vhosts/#{application}"


set :scm_user, "username"
set :scm_password, Proc.new { Capistrano::CLI.password_prompt("SVN password for #{scm_user}, please: ") }
set :repository, Proc.new { "--username #{scm_user} --password #{scm_password} --no-auth-cache #{repository_addy}"; }

#set :scm_username, "username"
#set :scm_prefer_prompt, true
#set :scm_auth_cache, true

#DEFINE the servers that need to be updated
role :app, "ip_address"
role :web, "ip_address"
role :db,  "ip_address", :primary => true

#SET BACKUPS
set :keep_releases, 3 # keep only 3 versions of the site on the server

#DEPLOY:SETUP Script to set the permissions
desc "Setup the permissions for the project"
task :setup_permissions, :roles => :app do
 as = fetch(:runner, "root")
 
 run "chown -Rh username:username /home/username/vhosts/#{application}"
 run "chown -Rh www-data:www-data /home/username/vhosts/#{application}/shared"
end
after "deploy:setup", :setup_permissions

#DEPLOY:COLD Scripts for moving the mongrel cluster file to folder.
desc "Setup a link to the mongrel cluster folder"
task :setup_mongrel_cluster, :roles => :app do
 as = fetch(:runner, "root")
 
 run "chown -Rh username:username /home/username/vhosts/#{application}/releases/"
 run "ln -s /home/username/vhosts/#{application}/current/config/mongrel_cluster.yml /etc/mongrel-cluster/sites-enabled/#{application}.yml"
end
after "deploy:cold", :setup_mongrel_cluster

#---------------------------------------------------------------------------------------------------
#MONGREL management
namespace :deploy do

 desc "Start the server"
 task :start, :roles => :app do
  as = fetch(:runner, "root")
  
  invoke_command "/etc/init.d/mongrel-cluster start"
 end
 
 desc "Stop the server"
 task :stop, :roles => :app do
  as = fetch(:runner, "root")
  
  invoke_command "/etc/init.d/mongrel-cluster stop"
 end
 
 desc "Restart the server"
 task :restart, :roles => :app do
  as = fetch(:runner, "root")
  
  #copy out the log files.
  run "cp /home/username/vhosts/#{application}/shared/log/production.log /home/username/vhosts/#{application}/shared/log/production-bak.log"
  run "rm -rf /home/username/vhosts/#{application}/shared/log/production.log"
  
  run "chown -Rh username:username /home/username/vhosts/#{application}/releases/"
  
  run "chmod -R 777 /home/username/vhosts/#{application}/current/tmp"
  
  invoke_command "/etc/init.d/mongrel-cluster restart"
 end
end

#Run all of the clean up scripts.
after "deploy:update", "deploy:cleanup" #Clean up the directory so that there are only x working copies set the number above.

desc "After Symlink shared files path"
task :after_symlink, :roles => :app do
  #FileUtils.mkdir_p("#{shared_path}/files") unless File.exists?("#{shared_path}/files")
  run "ln -nfs #{shared_path}/files #{release_path}/public/files"
  #run "ln -nfs #{shared_path}/tmp #{release_path}/public/tmp"
end
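The after_symlink task above boils down to repointing a symlink on every deploy so each release sees the shared files. Here is the idea in miniature with throwaway directories (the paths are made up for the demo); the -n flag is what lets ln replace an existing symlink instead of creating a new link inside the directory it points to:

```shell
#!/bin/sh
set -e
# Miniature version of the shared-files symlink trick from the recipe.
BASE=$(mktemp -d)
mkdir -p "$BASE/shared/files" "$BASE/release1/public" "$BASE/release2/public"
echo "uploaded asset" > "$BASE/shared/files/a.txt"

ln -nfs "$BASE/shared/files" "$BASE/release1/public/files"
ln -nfs "$BASE/shared/files" "$BASE/release2/public/files"   # repointing is safe

cat "$BASE/release2/public/files/a.txt"   # both releases see the shared file
rm -rf "$BASE"
```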

Thursday, March 5, 2009

Tahoe-LAFS Distributed, Secure, Fault-Tolerant Filesystem

I have long been looking for a viable off-site backup solution for both my business and personal data. The business part is pretty easy. I only back up a couple hundred megabytes, and even allowing for weekly full backups + daily incrementals, I can keep a few months of backups available in a basic 4 GB volume from rsync.net. (For what it's worth, a shout-out to Duplicity here, which is what I use to manage the backups to rsync.net. It handles everything, including encryption and an rsync-like only-send-the-differences algorithm.)

Backing up my personal files, though, is an entirely different problem. Like most people, this includes a huge amount of media: over 100 GB of audio and more than 200 GB of photographs. Backing up to any of the popular online backup companies will either cost me a fortune per month or run the risk of crossing some fuzzy "acceptable use" line.

The other day I ran across Tahoe-LAFS, and it has me really excited. It is a distributed, secure, and fault-tolerant filesystem. You create a node, point it at some disk space, and join a grid. Everyone in the grid shares the pool of storage. Every file is broken up into many pieces and stored redundantly, such that the loss of up to 70% of the nodes does not lose any data. (The redundancy and fault-tolerance values are adjustable, too, so you can tune them to your liking.)

While this doesn't solve the problem of a slow initial backup (I am limited by my own upstream bandwidth), I love the idea of not relying on a single company for access to my data, and knowing that what I store on the grid is opaque to anyone else. Disks are getting cheaper every month, so it doesn't seem too unreasonable to round up a bunch of people, buy a TB disk or two each and build a grid for everyone to use.
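The 70% figure matches Tahoe's default 3-of-10 encoding (adjustable, as noted above): each file becomes N=10 shares, any K=3 of which are enough to reconstruct it, so N-K shares can disappear. The trade-off is a storage expansion of N/K. The arithmetic is simple enough to sanity-check in shell:

```shell
#!/bin/sh
# Erasure-coding arithmetic for Tahoe's default K-of-N parameters.
K=3
N=10
echo "shares you can lose: $((N - K)) of $N"          # prints 7 of 10
echo "loss tolerance: $(( (N - K) * 100 / N ))%"      # prints 70%
echo "storage expansion: ${N}/${K}x (roughly 3.3x)"
```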

Tuesday, March 3, 2009

OpenSolaris ZFS Setup

I will have a completed project called OpenSolaris NAS device once I finish testing, but for now here are some directions for setting up a ZFS file system. I am using VMware Fusion for Mac and created the main system plus 4 additional 1 GB drives. I set them up as SATA drives, which is what I would have in the final product.

The first thing I had to do was install the software, and after I installed everything I opened the terminal and began hacking away. Here are the commands that I ran:

Get the ID for each of the attached drives.
iostat -En
Create a simple mirror of just 2 drives. (drive_id = id from command above)
zpool create your_pool_name mirror drive_id drive_id
SAMPLE of script above
zpool create your_pool_name mirror c4t1d0 c4t2d0
Check your work
zpool list
zpool status
create a folder
zfs create your_pool_name/folder_name
Adding more space to the storage pool
zpool add your_pool_name mirror drive_id drive_id
Checking all of the drives that make up the zfs pool
zpool status -v
Setting reserve limits of space
zfs set reservation=157m your_pool_name/folder
Show any of the reservations on the system
zfs get reservation your_pool_name/folder
Remove a reservation from a folder
zfs set reservation=none your_pool_name/folder
Setting Quotas
zfs set quota=3M your_pool_name/folder
Show Quotas
zfs get quota your_pool_name/folder
Removing the Quota
zfs set quota=none your_pool_name/folder
Changing Permissions (I am sure there is a ZFS way; I used Unix)
chown user:group /your_pool_name
Always check the pool to see if it is already compressed
zfs get compression your_pool_name/folder
Start compression
zfs set compression=on your_pool_name/folder
Turn off compression
zfs set compression=off your_pool_name/folder
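Here is the pool-building part of the above rolled into one script. It is a dry run (RUN=echo just prints the commands), and note that the drive IDs for the second mirror pair are hypothetical: the post only showed c4t1d0 and c4t2d0, so pull your own IDs from iostat -En.

```shell
#!/bin/sh
# Dry-run sketch of building and growing the mirrored pool.
# Clear RUN to execute for real on an OpenSolaris box.
RUN=echo
POOL=your_pool_name

$RUN zpool create $POOL mirror c4t1d0 c4t2d0   # first mirrored pair (from the post)
$RUN zfs create $POOL/folder_name
$RUN zpool add $POOL mirror c4t3d0 c4t4d0      # second pair; these IDs are made up
$RUN zpool status -v
$RUN zfs set quota=3M $POOL/folder_name
```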
Look for more articles on ZFS.

Monday, March 2, 2009

CentOS 4 Networking Config

I was having some problems setting up the network settings on a CentOS 4 system. I found a handy little command for editing the network configuration:
netconfig
Don't forget to restart the network after changing
/etc/init.d/network restart
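For reference, netconfig is really just writing the usual files under /etc/sysconfig. A rough sketch of what a static /etc/sysconfig/network-scripts/ifcfg-eth0 ends up looking like (the addresses below are made-up examples, substitute your own):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 (example values, not real ones)
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.50
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
ONBOOT=yes
```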

Setup of a new rails app with svn

The easiest way to set up a new repository for your Rails application is to ssh into the repository host and then, in a terminal as root, type:

svnadmin create /path/to/repos/new-repo-name
Then, to lock it down without using ssh, relying on the provided svnserve program running as a daemon on the host, run this command from the newly created repository directory:

vi conf/svnserve.conf
Make sure to set anon-access = none and auth-access = write in the [general] section. Also make sure to uncomment password-db = passwd. Then, after you save those changes, edit the password file:

vi conf/passwd
Add an entry for yourself and whomever you'll be sharing this repository with, such as other developers. Save and quit. While in the same repo directory, go ahead and make a temp directory, and inside the temp directory create three directories (trunk, tags, branches):

mkdir trunk tags branches
You'll want to import these directories into your repo.

svn import . [svn-host] -m "dir import"
You can now remove the temp directory you created and exit out of ssh. On your local machine, navigate to where you would like your Rails application to live and create the Rails application:

rails mynewapp
cd into the new rails application and then issue a svn import to import the new rails app into your repository

svn import . [svn-host]/trunk -m "initial app import"
You may now remove the app you just created on your local machine, because we will be checking it out from the repository and your repository will have all the latest changes.

svn co [svn-host]/trunk
Now let's ignore the database.yml file, the tmp and log directories

svn propset svn:ignore database.yml config/
svn propset svn:ignore '*' tmp/
svn propset svn:ignore '*' log/
That should do it: your database.yml file and your log and tmp directories will not be checked into your repository.
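One gotcha with svn:ignore: propset only changes your working copy, so the ignores do not reach the repository until you commit the property change. Sketched here as a dry run (RUN=echo) in case svn is not on this machine; clear RUN and run it from the working copy root for real:

```shell
#!/bin/sh
# The propset steps from above, plus the commit that actually records them.
RUN=echo
$RUN svn propset svn:ignore database.yml config/
$RUN svn propset svn:ignore '*' tmp/
$RUN svn propset svn:ignore '*' log/
$RUN svn commit -m "set svn:ignore on config, tmp and log" .
$RUN svn status --no-ignore   # ignored items show up flagged with I
```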

CactiEZ ISO Image

I came across another complete Cacti ISO yesterday and began working with it. I was very impressed with everything it comes with and how easy it was to get up and running. I had to post this link for anyone who wants a complete all-in-one package they can install on a VM or an old PC in the office.

CactiEZ - http://cactiez.cactiusers.org/

I would highly recommend that you take a look at this release.

Raspberry Pi Zero W - Wireless Configuration

create the file under "boot" folder wpa_supplicant.conf country=GB ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev u...