Tuesday, March 31, 2009

Handy Sed oneliners

I lost my copy of these sed one-liners some time ago, and I'm glad I was able to come across a copy again. This set of one-liners was originally compiled by Eric Pement (pemente[at]northpark[dot]edu).

Whether you are just beginning with sed or are a veteran, this sed one-liners reference will always be handy.


FILE SPACING:

# double space a file
sed G

# double space a file which already has blank lines in it. Output file
# should contain no more than one blank line between lines of text.
sed '/^$/d;G'

# triple space a file
sed 'G;G'

# undo double-spacing (assumes even-numbered lines are always blank)
sed 'n;d'

# insert a blank line above every line which matches "regex"
sed '/regex/{x;p;x;}'

# insert a blank line below every line which matches "regex"
sed '/regex/G'

# insert a blank line above and below every line which matches "regex"
sed '/regex/{x;p;x;G;}'

NUMBERING:

# number each line of a file (simple left alignment). Using a tab (see
# note on '\t' at end of file) instead of space will preserve margins.
sed = filename | sed 'N;s/\n/\t/'

# number each line of a file (number on left, right-aligned)
sed = filename | sed 'N; s/^/ /; s/ *\(.\{6,\}\)\n/\1 /'

# number each line of file, but only print numbers if line is not blank
sed '/./=' filename | sed '/./N; s/\n/ /'

# count lines (emulates "wc -l")
sed -n '$='

TEXT CONVERSION AND SUBSTITUTION:

# IN UNIX ENVIRONMENT: convert DOS newlines (CR/LF) to Unix format
sed 's/.$//' # assumes that all lines end with CR/LF
sed 's/^M$//' # in bash/tcsh, press Ctrl-V then Ctrl-M
sed 's/\x0D$//' # gsed 3.02.80, but top script is easier

# IN UNIX ENVIRONMENT: convert Unix newlines (LF) to DOS format
sed "s/$/`echo -e \\\r`/" # command line under ksh
sed 's/$'"/`echo \\\r`/" # command line under bash
sed "s/$/`echo \\\r`/" # command line under zsh
sed 's/$/\r/' # gsed 3.02.80

# IN DOS ENVIRONMENT: convert Unix newlines (LF) to DOS format
sed "s/$//" # method 1
sed -n p # method 2

# IN DOS ENVIRONMENT: convert DOS newlines (CR/LF) to Unix format
# Can only be done with UnxUtils sed, version 4.0.7 or higher.
# Cannot be done with other DOS versions of sed. Use "tr" instead.
sed "s/\r//" infile >outfile # UnxUtils sed v4.0.7 or higher
tr -d \r <infile >outfile # GNU tr version 1.22 or higher

# delete leading whitespace (spaces, tabs) from front of each line
# aligns all text flush left
sed 's/^[ \t]*//' # see note on '\t' at end of file

# delete trailing whitespace (spaces, tabs) from end of each line
sed 's/[ \t]*$//' # see note on '\t' at end of file

# delete BOTH leading and trailing whitespace from each line
sed 's/^[ \t]*//;s/[ \t]*$//'

# insert 5 blank spaces at beginning of each line (make page offset)
sed 's/^/     /'

# align all text flush right on a 79-column width
sed -e :a -e 's/^.\{1,78\}$/ &/;ta' # set at 78 plus 1 space

# center all text in the middle of 79-column width. In method 1,
# spaces at the beginning of the line are significant, and trailing
# spaces are appended at the end of the line. In method 2, spaces at
# the beginning of the line are discarded in centering the line, and
# no trailing spaces appear at the end of lines.
sed -e :a -e 's/^.\{1,77\}$/ & /;ta' # method 1
sed -e :a -e 's/^.\{1,77\}$/ &/;ta' -e 's/\( *\)\1/\1/' # method 2

# substitute (find and replace) "foo" with "bar" on each line
sed 's/foo/bar/' # replaces only 1st instance in a line
sed 's/foo/bar/4' # replaces only 4th instance in a line
sed 's/foo/bar/g' # replaces ALL instances in a line
sed 's/\(.*\)foo\(.*foo\)/\1bar\2/' # replace the next-to-last case
sed 's/\(.*\)foo/\1bar/' # replace only the last case

# substitute "foo" with "bar" ONLY for lines which contain "baz"
sed '/baz/s/foo/bar/g'

# substitute "foo" with "bar" EXCEPT for lines which contain "baz"
sed '/baz/!s/foo/bar/g'

# change "scarlet" or "ruby" or "puce" to "red"
sed 's/scarlet/red/g;s/ruby/red/g;s/puce/red/g' # most seds
gsed 's/scarlet\|ruby\|puce/red/g' # GNU sed only

# reverse order of lines (emulates "tac")
# bug/feature in HHsed v1.5 causes blank lines to be deleted
sed '1!G;h;$!d' # method 1
sed -n '1!G;h;$p' # method 2

# reverse each character on the line (emulates "rev")
sed '/\n/!G;s/\(.\)\(.*\n\)/&\2\1/;//D;s/.//'

# join pairs of lines side-by-side (like "paste")
sed '$!N;s/\n/ /'

# if a line ends with a backslash, append the next line to it
sed -e :a -e '/\\$/N; s/\\\n//; ta'

# if a line begins with an equal sign, append it to the previous line
# and replace the "=" with a single space
sed -e :a -e '$!N;s/\n=/ /;ta' -e 'P;D'

# add commas to numeric strings, changing "1234567" to "1,234,567"
gsed ':a;s/\B[0-9]\{3\}\>/,&/;ta' # GNU sed
sed -e :a -e 's/\(.*[0-9]\)\([0-9]\{3\}\)/\1,\2/;ta' # other seds

# add commas to numbers with decimal points and minus signs (GNU sed)
gsed ':a;s/\(^\|[^0-9.]\)\([0-9]\+\)\([0-9]\{3\}\)/\1\2,\3/g;ta'

# add a blank line every 5 lines (after lines 5, 10, 15, 20, etc.)
gsed '0~5G' # GNU sed only
sed 'n;n;n;n;G;' # other seds

SELECTIVE PRINTING OF CERTAIN LINES:

# print first 10 lines of file (emulates behavior of "head")
sed 10q

# print first line of file (emulates "head -1")
sed q

# print the last 10 lines of a file (emulates "tail")
sed -e :a -e '$q;N;11,$D;ba'

# print the last 2 lines of a file (emulates "tail -2")
sed '$!N;$!D'

# print the last line of a file (emulates "tail -1")
sed '$!d' # method 1
sed -n '$p' # method 2

# print only lines which match regular expression (emulates "grep")
sed -n '/regexp/p' # method 1
sed '/regexp/!d' # method 2

# print only lines which do NOT match regexp (emulates "grep -v")
sed -n '/regexp/!p' # method 1, corresponds to above
sed '/regexp/d' # method 2, simpler syntax

# print the line immediately before a regexp, but not the line
# containing the regexp
sed -n '/regexp/{g;1!p;};h'

# print the line immediately after a regexp, but not the line
# containing the regexp
sed -n '/regexp/{n;p;}'

# print 1 line of context before and after regexp, with line number
# indicating where the regexp occurred (similar to "grep -A1 -B1")
sed -n -e '/regexp/{=;x;1!p;g;$!N;p;D;}' -e h

# grep for AAA and BBB and CCC (in any order)
sed '/AAA/!d; /BBB/!d; /CCC/!d'

# grep for AAA and BBB and CCC (in that order)
sed '/AAA.*BBB.*CCC/!d'

# grep for AAA or BBB or CCC (emulates "egrep")
sed -e '/AAA/b' -e '/BBB/b' -e '/CCC/b' -e d # most seds
gsed '/AAA\|BBB\|CCC/!d' # GNU sed only

# print paragraph if it contains AAA (blank lines separate paragraphs)
# HHsed v1.5 must insert a 'G;' after 'x;' in the next 3 scripts below
sed -e '/./{H;$!d;}' -e 'x;/AAA/!d;'

# print paragraph if it contains AAA and BBB and CCC (in any order)
sed -e '/./{H;$!d;}' -e 'x;/AAA/!d;/BBB/!d;/CCC/!d'

# print paragraph if it contains AAA or BBB or CCC
sed -e '/./{H;$!d;}' -e 'x;/AAA/b' -e '/BBB/b' -e '/CCC/b' -e d
gsed '/./{H;$!d;};x;/AAA\|BBB\|CCC/b;d' # GNU sed only

# print only lines of 65 characters or longer
sed -n '/^.\{65\}/p'

# print only lines of less than 65 characters
sed -n '/^.\{65\}/!p' # method 1, corresponds to above
sed '/^.\{65\}/d' # method 2, simpler syntax

# print section of file from regular expression to end of file
sed -n '/regexp/,$p'

# print section of file based on line numbers (lines 8-12, inclusive)
sed -n '8,12p' # method 1
sed '8,12!d' # method 2

# print line number 52
sed -n '52p' # method 1
sed '52!d' # method 2
sed '52q;d' # method 3, efficient on large files

# beginning at line 3, print every 7th line
gsed -n '3~7p' # GNU sed only
sed -n '3,${p;n;n;n;n;n;n;}' # other seds

# print section of file between two regular expressions (inclusive)
sed -n '/Iowa/,/Montana/p' # case sensitive

SELECTIVE DELETION OF CERTAIN LINES:

# print all of file EXCEPT section between 2 regular expressions
sed '/Iowa/,/Montana/d'

# delete duplicate, consecutive lines from a file (emulates "uniq").
# First line in a set of duplicate lines is kept, rest are deleted.
sed '$!N; /^\(.*\)\n\1$/!P; D'

# delete duplicate, nonconsecutive lines from a file. Beware not to
# overflow the buffer size of the hold space, or else use GNU sed.
sed -n 'G; s/\n/&&/; /^\([ -~]*\n\).*\n\1/d; s/\n//; h; P'

# delete all lines except duplicate lines (emulates "uniq -d").
sed '$!N; s/^\(.*\)\n\1$/\1/; t; D'

# delete the first 10 lines of a file
sed '1,10d'

# delete the last line of a file
sed '$d'

# delete the last 2 lines of a file
sed 'N;$!P;$!D;$d'

# delete the last 10 lines of a file
sed -e :a -e '$d;N;2,10ba' -e 'P;D' # method 1
sed -n -e :a -e '1,10!{P;N;D;};N;ba' # method 2

# delete every 8th line
gsed '0~8d' # GNU sed only
sed 'n;n;n;n;n;n;n;d;' # other seds

# delete ALL blank lines from a file (same as "grep '.' ")
sed '/^$/d' # method 1
sed '/./!d' # method 2

# delete all CONSECUTIVE blank lines from file except the first; also
# deletes all blank lines from top and end of file (emulates "cat -s")
sed '/./,/^$/!d' # method 1, allows 0 blanks at top, 1 at EOF
sed '/^$/N;/\n$/D' # method 2, allows 1 blank at top, 0 at EOF

# delete all CONSECUTIVE blank lines from file except the first 2:
sed '/^$/N;/\n$/N;//D'

# delete all leading blank lines at top of file
sed '/./,$!d'

# delete all trailing blank lines at end of file
sed -e :a -e '/^\n*$/{$d;N;ba' -e '}' # works on all seds
sed -e :a -e '/^\n*$/N;/\n$/ba' # ditto, except for gsed 3.02*

# delete the last line of each paragraph
sed -n '/^$/{p;h;};/./{x;/./p;}'

SPECIAL APPLICATIONS:

# remove nroff overstrikes (char, backspace) from man pages. The 'echo'
# command may need an -e switch if you use Unix System V or bash shell.
sed "s/.`echo \\\b`//g" # double quotes required for Unix environment
sed 's/.^H//g' # in bash/tcsh, press Ctrl-V and then Ctrl-H
sed 's/.\x08//g' # hex expression for sed v1.5

# get Usenet/e-mail message header
sed '/^$/q' # deletes everything after first blank line

# get Usenet/e-mail message body
sed '1,/^$/d' # deletes everything up to first blank line

# get Subject header, but remove initial "Subject: " portion
sed '/^Subject: */!d; s///;q'

# get return address header
sed '/^Reply-To:/q; /^From:/h; /./d;g;q'

# parse out the address proper. Pulls out the e-mail address by itself
# from the 1-line return address header (see preceding script)
sed 's/ *(.*)//; s/>.*//; s/.*[:<] *//'

# add a leading angle bracket and space to each line (quote a message)
sed 's/^/> /'

# delete leading angle bracket & space from each line (unquote a message)
sed 's/^> //'

# remove most HTML tags (accommodates multiple-line tags)
sed -e :a -e 's/<[^>]*>//g;/</N;//ba'

# zip up each .TXT file individually, setting the name of each .ZIP
# file to the basename of the .TXT file (DOS environment)
echo @echo off >zipup.bat
dir /b *.txt | sed "s/^\(.*\)\.TXT/pkzip -mo \1 \1.TXT/" >>zipup.bat

TYPICAL USE: Sed takes one or more editing commands and applies all of
them, in sequence, to each line of input. After all the commands have
been applied to the first input line, that line is output and a second
input line is taken for processing, and the cycle repeats. The
preceding examples assume that input comes from the standard input
device (i.e., the console; normally this will be piped input). One or
more filenames can be appended to the command line if the input does
not come from stdin. Output is sent to stdout (the screen). Thus:

cat filename | sed '10q' # uses piped input
sed '10q' filename # same effect, avoids a useless "cat"
sed '10q' filename > newfile # redirects output to disk

For additional syntax instructions, including the way to apply editing
commands from a disk file instead of the command line, consult "sed &
awk, 2nd Edition," by Dale Dougherty and Arnold Robbins (O'Reilly,
1997; http://www.ora.com), "UNIX Text Processing," by Dale Dougherty
and Tim O'Reilly (Hayden Books, 1987) or the tutorials by Mike Arst
distributed in U-SEDIT2.ZIP (many sites). To fully exploit the power
of sed, one must understand "regular expressions." For this, see
"Mastering Regular Expressions" by Jeffrey Friedl (O'Reilly, 1997).
The manual ("man") pages on Unix systems may be helpful (try "man
sed", "man regexp", or the subsection on regular expressions in "man
ed"), but man pages are notoriously difficult. They are not written to
teach sed use or regexps to first-time users, but as a reference text
for those already acquainted with these tools.

QUOTING SYNTAX: The preceding examples use single quotes ('...')
instead of double quotes ("...") to enclose editing commands, since
sed is typically used on a Unix platform. Single quotes prevent the
Unix shell from interpreting the dollar sign ($) and backquotes
(`...`), which are expanded by the shell if they are enclosed in
double quotes. Users of the "csh" shell and derivatives will also need
to quote the exclamation mark (!) with the backslash (i.e., \!) to
properly run the examples listed above, even within single quotes.
Versions of sed written for DOS invariably require double quotes
("...") instead of single quotes to enclose editing commands.

USE OF '\t' IN SED SCRIPTS: For clarity in documentation, we have used
the expression '\t' to indicate a tab character (0x09) in the scripts.
However, most versions of sed do not recognize the '\t' abbreviation,
so when typing these scripts from the command line, you should press
the TAB key instead. '\t' is supported as a regular expression
metacharacter in awk, perl, and HHsed, sedmod, and GNU sed v3.02.80.

VERSIONS OF SED: Versions of sed do differ, and some slight syntax
variation is to be expected. In particular, most do not support the
use of labels (:name) or branch instructions (b,t) within editing
commands, except at the end of those commands. We have used the syntax
which will be portable to most users of sed, even though the popular
GNU versions of sed allow a more succinct syntax. When the reader sees
a fairly long command such as this:

sed -e '/AAA/b' -e '/BBB/b' -e '/CCC/b' -e d

it is heartening to know that GNU sed will let you reduce it to:

sed '/AAA/b;/BBB/b;/CCC/b;d' # or even
sed '/AAA\|BBB\|CCC/b;d'

In addition, remember that while many versions of sed accept a command
like "/one/ s/RE1/RE2/", some do NOT allow "/one/! s/RE1/RE2/", which
contains space before the 's'. Omit the space when typing the command.

OPTIMIZING FOR SPEED: If execution speed needs to be increased (due to
large input files or slow processors or hard disks), substitution will
be executed more quickly if the "find" expression is specified before
giving the "s/.../.../" instruction. Thus:

sed 's/foo/bar/g' filename # standard replace command
sed '/foo/ s/foo/bar/g' filename # executes more quickly
sed '/foo/ s//bar/g' filename # shorthand sed syntax

On line selection or deletion in which you only need to output lines
from the first part of the file, a "quit" command (q) in the script
will drastically reduce processing time for large files. Thus:

sed -n '45,50p' filename # print line nos. 45-50 of a file
sed -n '51q;45,50p' filename # same, but executes much faster

If you have any additional scripts to contribute or if you find errors
in this document, please send e-mail to the compiler. Indicate the
version of sed you used, the operating system it was compiled for, and
the nature of the problem. Various scripts in this file were written
or contributed by:

Al Aab # "seders" list moderator
Edgar Allen # various
Yiorgos Adamopoulos
Dale Dougherty # author of "sed & awk"
Carlos Duarte # author of "do it with sed"
Eric Pement # author of this document
Ken Pizzini # author of GNU sed v3.02
S.G. Ravenhall # great de-html script
Greg Ubben # many contributions & much help


Monday, March 30, 2009

imikimi customize your world

In case you are wondering what this is read on...

imikimi is a website where you can create customizable Comments, Images, Animations, Photos, Frames and Graphics for MySpace, Hi5, Orkut, Friendster and Facebook. If you like pimping out your MySpace, Hi5, Orkut, Friendster or Facebook site, then this is the site you need. I honestly did not know about imikimi until I saw one kid browsing it and creating his own images for his Friendster page. As I watched him create the images he needed, I got curious about what else it can do and found out that what you can do with this site is awesome and bound only by how much creativity and imagination you have.

Check out the imikimi site now and try it out. Start making your kimis and who knows, you may be one of the top kimi artists :)


Saturday, March 28, 2009

Website Grader

Lately, in my search for how to better optimize this site, I've encountered great sites for SEO tools and SEO tips. On one of those days, as I sat browsing and blog hopping, I got a chance to visit this site...

Well, the site I'm referring to is Website Grader. This site from HubSpot checks and generates a report about your website and how to optimize it further... it also gives you a grade (like in school hehe).

So if you are like me and always looking at how to improve your site, you can at least try this site to have something to refer to as a baseline.


Friday, March 27, 2009

Twitter twitter | tweet me

I was introduced to Twitter a year ago by one of my officemates and registered to try it out. Twitter is a social networking and micro-blogging service that lets its users send and read other users' updates, known as tweets.

Users can send and receive updates via the Twitter website, SMS, RSS (receive only), or via third-party applications such as Tweetie, Twitterrific, Twitterfon, TweetDeck and Feedalizr. It is free to use over the web, but using SMS is not, as it incurs phone service provider fees.

Twitter has grown in popularity in certain locations and is still on the rise. I've personally neglected the service (since I'm a cheapskate and don't want to pay for SMS charges from my provider hehehe).

Anyway, Twitter is a good alternative for keeping family and friends updated on what you are doing =D.


Thursday, March 26, 2009

Optimize Blogger Title

For the longest time I've thought about modifying the title tag for posts that appear on my Blogger page. I've finally found time to optimize the Blogger titles in this blog.

The road to optimizing the Blogger title was a bleak one, especially when you are not using classic Blogger. I searched for "optimize blogger title" and "blogger title optimization" only to find that most of the results showed the procedure for the old Blogger. Fortunately I was able to find a couple of sites that showed how to optimize the Blogger title for the new Blogger, and after some trial and error I was able to successfully save the template I was using with the optimized Blogger title code in it.

Here's how I did it:

Go to the tab “Design” and subsequently to the “Edit HTML”.

Find the code for the titles:

<b:include data='blog' name='all-head-content'/>
<title><data:blog.pageTitle/></title>
<b:skin><![CDATA[/*


Replace with this:

<b:include data='blog' name='all-head-content'/>
<b:if cond='data:blog.pageType == "item"'>
<title><data:blog.pageName/> | <data:blog.title/></title>
<b:else/>
<title><data:blog.pageTitle/></title>
</b:if>
<b:skin><![CDATA[/*

Thus the titles for the homepage, archives and labels remain unchanged (the blog title), while the titles of individual post pages are optimized (post name | blog title).


Wednesday, March 25, 2009

Free Blogger Templates | Free Blogger Layouts | Free Blogger Themes

If you are like me and want to get a better-looking Blogger template or layout for your blog without having to pay for it, then here are some free Blogger template and layout websites where you can download them.

Personally, I have been using Blogger templates from eblog templates. They have lots of free Blogger templates, which they also call "make money blog templates". Most of the free Blogger templates they offer are very modern and already Web 2.0. The free templates also have built-in ad locations where you can put your AdSense or other ads. In eblog templates you can also find WordPress themes or layouts for your WordPress blogs. Eblog templates also offers premium templates you can buy for what I guess is a reasonable price. To download their free Blogger templates you just have to sign in (sign-up is free).


Another site I'm fond of getting free Blogger templates from is freetemplates. On this site you can find New (XML) and Classic (HTML) Blogger template layouts for your Blogspot-powered blog.

Gecko and Fly have a nice list of well-designed Blogger templates. Most are designs imported from WordPress designs that are quite unlike Blogger.com's default designs. I personally like their lightbox template, which I used in my early photoblog.

So there you go... sites for great free Blogger template downloads. If you've got other great sites that offer free Blogger template downloads, as well as free WordPress theme downloads, you can just list them in the comments here.


Tuesday, March 24, 2009

Stale NFS file handle error

This error message is usually seen when a file or directory that was opened by an NFS client is removed, renamed, or replaced.

In order to fix this problem, the NFS file handles must be renegotiated.

You can try one of these on the client machine:

1. Unmount and remount the file system; you may need to use the -O (overlay) option of mount (see the example after this list).

From the man pages:
-O Overlay mount. Allow the file system to be mounted over an existing mount point, making the underlying file system inaccessible. If a mount is attempted on a pre-existing mount point without setting this flag, the mount will fail, producing the error "device busy".

2. Kill or restart the process trying to use the nonexistent files.

3. Create another mount point and access the files from the new mount point.

4. Run: /etc/init.d/nfs.client stop; /etc/init.d/nfs.client start

5. Reboot the client having problems.

You can also try a process restart of the NFS server daemons.
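
For illustration, a minimal unmount/remount on a Solaris client might look like the following sketch; nfsserver:/export/data and /mnt/data are placeholder names, not from the original notes:

# umount /mnt/data
# mount -F nfs nfsserver:/export/data /mnt/data
# mount -F nfs -O nfsserver:/export/data /mnt/data

The last form uses the -O overlay option described above, for the case where the stale mount point cannot be unmounted because it is busy.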


Monday, March 23, 2009

Creating user access restrictions:

Got these great notes from brandonhutchinson.com. It's a must-read.


Email-only access
Create a user account with a home directory of /dev/null and a shell that does not permit logins, such as /bin/false or /dev/null.

FTP-only access
Set the user's shell to one that does not permit logins, such as /bin/false or /dev/null.

Note: your FTP server may require that the user's shell is listed in the /etc/shells file.

Preventing FTP access
Add the user's account name into /etc/ftpusers.
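
As a rough sketch, the first three cases above might be set up like this on a Solaris box; batchuser and someuser are placeholder account names:

# useradd -d /dev/null -s /bin/false batchuser
# usermod -s /bin/false someuser
# echo someuser >> /etc/ftpusers

The first line creates an email-only account, the second switches an existing account to a no-login shell for FTP-only access (keep the /etc/shells caveat above in mind), and the third blocks FTP for an account that keeps its normal shell.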

Restricted access
Set the user's shell to a restricted shell such as /bin/rksh or /bin/rsh.

This prevents:
1. Use of the cd command
2. Setting or changing the PATH variable
3. Specifying a command or filename containing a slash (/) -- only filenames in the current directory can be used
4. Using output redirection (> or >>)

Restricting by user group

Add the following to /etc/profile:

if [ -n "`groups | grep {group_name}`" ] ; then
echo "Users from group {group_name} cannot login to this machine."
exit 1
fi

This would restrict telnet and rsh access for users using Bourne shell or Korn shell. C shell users would still be able to access the machine.

The following will restrict the C Shell as well as Bourne and Korn shells under Solaris 2.6, 7, 8, and 9 systems:

Create a text file called /etc/su_users.txt

This will have entries of usernames, one per line, like this:
luke
hans
leia

Add the following code to the /etc/profile file:

# 04-26-2002 - Restricts telnet and ssh access for batch user accounts
# Bourne (sh) and Korn (ksh) shell users use the script in the /etc/profile file
# C (csh) shell users use the script in the /etc/.login file
# The /etc/su_users.txt file contains the list of batch accounts.

TTY=`tty | awk -F/ '{printf ($3"/"$4)}'`
USER_TTY=`w | awk '(\$2=="'$TTY'"){print \$1}'`
for USERID in `cat /etc/su_users.txt`
do
if [ "$USER_TTY" = "$USERID" ]
then
echo
echo Interactive logins for the $USER_TTY user are disabled.
echo Please login with your user id and do a su - $USER_TTY.
echo
exit
fi
done

Add the following code to the /etc/.login file:

# 04-26-2002 - Restricts telnet and ssh access for batch user accounts
# Bourne (sh) and Korn (ksh) shell users use the script in the /etc/profile file
# C (csh) shell users use the script in the /etc/.login file
# The /etc/su_users.txt file contains the list of batch accounts.

set TTY=`tty | awk -F/ '{printf ($3"/"$4)}'`
set USER_TTY=`w | awk '{if ($2=="'$TTY'") print $1}'`
foreach USERID (`cat /etc/su_users.txt`)
if ( "$USER_TTY" == "$USERID" ) then
echo
echo Interactive logins for the $USER_TTY user are disabled.
echo Please login with your user id and do a su - $USER_TTY.
echo
logout
endif
end


Sunday, March 22, 2009

Gzip to a different directory

There are times when you have to gzip files to a different directory, since if you do it in the current directory you may end up filling the filesystem.

Here's how to gzip to a different directory:

Compress the files to an alternate (larger) file system using gzip's --to-stdout (-c) and shell
redirection, remove the original file, and move the compressed file to the original location.


To show it more clearly here's an example:

# pwd
/var/adm

# ls -l wtmpx.OLD
-rw-r--r-- 1 root other 591931980 Aug 6 10:30 wtmpx.OLD

# df -k /var
Filesystem            kbytes    used    avail capacity  Mounted on
/dev/md/dsk/d3       1016122  869649    85506    96%    /var

# df -k /files0
Filesystem            kbytes    used    avail capacity  Mounted on
/dev/md/dsk/d5      31387621 2365849 28707896     8%    /files0

# gzip --best --to-stdout wtmpx.OLD > /files0/wtmpx.OLD.gz
# rm wtmpx.OLD
# mv /files0/wtmpx.OLD.gz .


Saturday, March 21, 2009

Copying Downloaded Free DS ROM for DS lite (Free DS Lite Games Download)

I guess since we got this started, if you haven't been reading my posts you can catch up by reading back through my earlier posts Installing the R4DS from scratch | DS lite R4 and Nintendo DS Lite Roms Download (for NDS Games).

So now you have the R4 on your DS Lite and have the free downloaded DS Lite games; the next question will naturally be: how do I copy the free DS Lite games that I downloaded?

So be a good boy and read on...

1. Unzip the zip file you've downloaded using your favorite unzip application, but keep the original zip file for future use (it's a good idea to burn your free NDS Lite games to a CD).

2. Remove your microSD card from the R4DS cartridge.

3. Insert your microSD card to the microSD USB adaptor that you got when you bought your R4DS.

4. Insert the microSD USB adaptor (where your microSD card is inserted) to your PC’s USB port. Windows will automatically detect it.

5. Go to My Computer. You will see a new drive in it. It's your microSD card. Double click it.

6. Copy the .NDS file that you unzipped on step 1 and paste it on the drive that you opened on step 5.

Don't paste it into any subdirectory; just paste it in the root directory. You only need to copy the .NDS files, no more, no less; ignore the other files that go with the zip file.

7. Safely remove your USB drive when done copying all the NDS files.

8. Remove the microSD card from the microSD USB adaptor.

9. Insert the microSD card to your R4DS cartridge.

10. Insert your R4DS cartridge to your Nintendo DS Lite.

11. Turn on your Nintendo DS Lite.

If you followed all the directions above correctly, you’ll be able to see the copied games on the list of games on your R4DS now.

Enjoy!


Friday, March 20, 2009

Installing the R4DS from scratch | DS lite R4

Have you ever wanted to run homebrew applications or downloaded ROMs on your DS Lite (NDS or Nintendo DS Lite)? One of my posts showed where to download free NDS or NDS Lite games; now here's how to install the R4 (NDS Lite R4) from scratch.

Files you will need:


English-1.18.rar
moonshell171p1_with_dpgtools131.zip
DSOrganize_3_2.zip
es_r4m3.zip

Hardware you'll need:

Revolution for DS (R4DS):

Your microSD - 2GB or below; received news that 4GB may not work (haven't tested though, since I don't have one yet; sorry);

USB reader for the microSD (either the micro-SD to SD adapter with a USB card reader or the microSD USB reader would do).

Ensure that the microSD is empty and that you have backed up everything first before proceeding further.

Procedure:


Install the R4DS kernel file. Either download the kernel file English-1.18.rar from above, or the latest one from http://www.r4ds.com/, if it's up already (it's been down for quite a while now). Unzip the contents of the kernel file. You should end up with the _system_ folder/directory, DS_MENU.DAT, and _DS_MSHL.NDS.

Connect your R4DS's microSD to your PC via your card reader.
Copy all of the files into the microSD.

Update Moonshell. You'll be using the moonshell171p1_with_dpgtools131.zip file (or get the latest version from http://mdxonlinemirror.dyndns.org/moonshell/files/) to update Moonshell. Do the following:

Ensure that the microSD is plugged in to your PC;
Unzip the moonshell zip file into a temporary directory;
Run setup.exe in that temporary directory; it will automatically detect the drive assigned to your microSD

Click OK

Configure the moonshell as follows:

Configuration files: Select moonshl.ini (full) is copied.
ROM Files: Select R4TF R4(DS) - Revolution for DS only.
Other configs: this is up to you :)

Click OK then wait for the setup to Finish
Verify that the microSD has the MoonShell_R4TF_M3Simply-R4DS (MicroSD Card).nds and moonshl folder/directory.

Copy the NDS game ROMs. I would suggest that you create a new folder/subdirectory, say, named Games, and then give each ROM its own directory.

This is entirely up to you. I suggest the above just for easier organization.

Install DSOrganize
DSOrganize is one of the most popular homebrew organizer applications for the Nintendo DS. It provides a calendar, drawing pad, todo list, address book, and web browser. Almost like a PDA, but not quite.

You will need the DSOrganize file and the custom exec_stub.bin file. You can download both from http://www.dragonminded.com/?loc=ndsdev/DSOrganize or you can use the DSOrganize_3_2.zip and es_r4m3.zip above.

Unzip the DSOrganize zip file into your hard drive;
Unzip the exec_stub.bin file from es_r4m3.zip.
Replace the exec_stub.bin file in the DSOrganize/RESOURCES directory.
Copy the entire DSOrganize folder and DSOrganize.nds file into the root directory of the microSD.

And that's it. I had no time to play with the DSOrganize plugins, though, so I can't comment on them yet.

OPTIONAL: Customize Theme

First, it is important to know which files the images on the NDS are mapped to. Here's the rough description:

logo.bmp - Bootup top screen image;
icons.bmp - Bootup bottom/touchscreen image (i.e., "Play", "Media", or "Slot2");
bckgrd_1.bmp - the browser top screen background image;
bckgrd_2.bmp - the browser bottom/touchscreen background image;
gbaframe.bmp - the border used for Game Boy Advance (GBA) games (slot 2).

Hence, when using a theme, you need to rename the images to their corresponding filenames above. (You wouldn't want the top and bottom screen images on the wrong screen, would you?)

Here are some important notes to remember:

The files should be copied into the _system_ folder;
The image size should be 256x192;
The file format should be a 24bit Bitmap (BMP).

If you prefer downloading instead of creating your own, search the web. I suggest the site http://www.ndsthemes.com/, which, thankfully, is still up and running.


Thursday, March 19, 2009

Open Up Your OS | Open source operating systems

There are dozens of free and open source operating systems out there, like those based on Unix, BSD, even Amiga and MS-DOS. Some are very advanced and prime-time ready; others are little more than programming experiments in perpetual alpha mode (i.e. not ready for the vast majority of computer users to install, especially not as a primary OS). There are operating systems that are the result of one programmer's hard work as a labour of love; there are others with a huge development community. There are those based on an open source foundation, built up and then sold, and there are others that are entirely free to use.

In short, there's enough fodder for discussion to fill a book... perhaps even a series of books.

We're not going to try to write an authoritative guide on free and open source operating systems -- indeed, there are too many options to make that possible. Rather, we're going to give a run-down of some operating systems that we've tried, that we've liked, and that we might even consider doing some command-line tweaking and boot-table configuring for, to run on our home systems as a secondary (perhaps even primary) operating system environment.


***How did we get here?***

It wasn't long ago that Linux was synonymous with command line hackery, obtuse text commands rather than the pretty graphical user interfaces that most of us are much more familiar with, incompatibilities with software we need to use everyday and so on.

However, open source and free software have made great strides over the past several years. Specialized applications aside, if there's something you do on a Windows or Mac machine, chances are there's equivalent software to do the same thing on an open source OS.

Many people's first real exposure to operating systems based on open source code came with the netbook trend. To keep prices down, these diminutive netbooks cut back on system specs and used free, often custom, operating systems. The original Eee PC, the Acer Aspire One and even netbooks from companies like Dell and HP had a low-end model that used an operating system based on open source code.


Provided users weren't looking for a primary PC, which is to say assuming they had a desktop or laptop at home or in the office, the Linux-based operating systems on the aforementioned netbooks and the provided software would be more than sufficient for general computing. Consider too that the whole netbook category is based on the "cloud computing" concept, where your files are hosted "in the cloud" with services like Flickr and Picasa for pictures and Google Docs or Zoho for documents, spreadsheets and other office staples.


Ubuntu
Website: www.ubuntu.com
Current stable version: 8.10
Download size: 699MB

No discussion of open source operating systems would be complete without talking about Ubuntu (pronounced "oo-boon-too"). This Linux distro has reached the mainstream.

Ubuntu itself, along with derivatives like the KDE-based Kubuntu and the education-specific Edubuntu, is intended to be used on reasonably good-spec machines. If you can run Windows XP on your PC, you're in good shape. If your machine can run Vista without issues, you can use all the fancy Compiz effects like 3D desktop transitions, wobbly windows and so on.

Xubuntu is another Ubuntu derivative based on XFCE and is much lighter on system requirements. I travelled South East Asia for three months with just my Acer TravelMate 340T running Xubuntu 7.04 and was very happy with the results. I was able to do everything I needed to do: file stories, manage pictures, send and receive mail, use web services to book accommodations and the like. The machine (which I still have and occasionally use) was "designed for Windows 98" and choked on Windows XP.

There are a number of Ubuntu derivatives, some of which we'll get to below. These are application-specific distros, tweaked with software and services for a specific purpose: Linux Mint for beginners, Mythbuntu for media center PCs, and Easy Peasy, which evolved from the Ubuntu Eee project (not to be confused with Eeebuntu).

Ubuntu and its offspring promote ease of use. Given a reasonable adjustment period, savvy PC users can expect to get used to Ubuntu's intricacies and, specialized applications aside, will be able to do everything they did on their Windows and Mac OS X machines.


Puppy Linux
Web site: www.puppylinux.org
Current stable version: 4.1.2
Download size: 94MB

After I got back to Canada from the aforementioned SEA trip, I learned about Puppy Linux and made the switch on my old laptop. It's an impressive distro, the core of which is created and maintained by Barry Kauler. When I first tried Puppy Linux out, I was amazed by the fact that everything just seemed to work, right out of the box -- or rather, right off the recently burned disc. Puppy Linux can be installed as a main or secondary operating system, but can also be run right off a live CD. Unlike many other live discs, the Puppy disc can be removed once the system is booted: it's lightweight and can be run entirely in RAM. Since I discovered Puppy Linux, I always have a live disc on hand, as it boots on literally every machine I've tried it in and gives a graphical disk partitioner, a web browser, networking (after a quick guide and install) and well-thought-out wizards that explain in plain English what's going on. Puppy Linux is what I turn to when all else fails.

In addition to running in RAM, Puppy can also be installed as a primary or secondary OS. It's perfect for older machines that have a hard time running Ubuntu or Windows, and does basic computing tasks like word processing, spreadsheets, web browsing, e-mail and more without needing to install anything.


DSL---Damn Small Linux
Web site: damnsmalllinux.org
Current stable version: 4.4.10
Download size: 50MB

Another light install that just seems to work. Damn Small Linux (DSL, not to be confused with Digital Subscriber Line Internet service) started, as the web site says, as an experiment to see how much could be fit onto a 50MB live install disc. The idea was to fit as much as possible onto a business card CD (remember those?) as a live Linux distro. It has a long list of developers from all over the world and is well supported. And, like Puppy Linux, it can run entirely in RAM. Perfect for old machines that have been left in the dust by the marching-ever-on progress of commercial operating systems. DSL is light enough to run on a 486 with a mere 16MB of RAM.

As the name suggests, DSL's highlight is its small footprint. Though it's refined and works well, it's not as user-friendly as some other distros we've tried. That said, the initial install process is simple and it does the basics well. If your Net connection sucks, if your service provider throttles bandwidth (hint: if it's one of the majors, they do), if you're on a "Lite" connection or even dial-up, or if you suffer under draconian bandwidth caps, the small size of the install ISO download makes it well worth a try.


Easy Peasy
Web site: www.geteasypeasy.com
Current Stable version: 1.0
Download size: 865MB

Easy Peasy evolved from the Ubuntu Eee project and changed its name to avoid confusion with Eeebuntu.

As we hinted, it's based on Ubuntu, but it's lighter and uses a one-click interface like the Linspire distro on the base model of the Acer Aspire One PC. System tools and applications are divided into categories that run along the left-hand side of the desktop. Categories are broken down into Favourites, Accessories, Games, Graphics, Internet, Office, Sound and Video, Universal Access, Preferences and Administration.

Applications like Skype for VoIP calls, Firefox for browsing and Pidgin can be found in the Internet category, among others. Office applications like OpenOffice.org word processing, presentations and spreadsheets are found in the Office category. There are a lot of free and open source apps installed by default, so once you download the disc image and install it on your netbook or PC, you'll be presented with a system that can do all of the basic computing tasks and more.

If you're a fan of the base-model Eee PC and Acer Aspire One approach of dividing applications into categories and doing away with the traditional desktop interface, but suffer under the convoluted methods for installing the applications you want (like Skype, Pidgin and the latest version of Firefox), Easy Peasy might be the answer.


Article by: Andrew Moore-Crispin
First printed in: Hub Computer Paper March 2009 Volume 22 Number 03
http://hubcanada.com


Wednesday, March 18, 2009

Building a Solaris 10 Jumpstart in just minutes

In case you are looking for a fast way to move or transfer an existing JumpStart tree to another Solaris 10 server and be able to use the JumpStart functionality, just follow these steps:

Check and be sure that the following packages are installed on the new JumpStart server you are building (a quick check is shown after the list):


SUNWnfssu Network File System (NFS) server support (Usr)
SUNWnfssr Network File System (NFS) server support (Root)
SUNWnfsskr Network File System (NFS) server kernel support (Root)
SUNWtftp Trivial File Transfer Server
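
One quick way to verify them (not part of the original notes, just a convenience) is to ask pkginfo for the package names; it reports an error for any package that is not installed:

# pkginfo SUNWnfssu SUNWnfssr SUNWnfsskr SUNWtftp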

On the new Solaris 10 JumpStart server do this:

# mkdir /export/install

# cd /export/install

# ssh old_JumpStart_server "cd /export/install; tar cf - ." | tar xpf -

# echo 'share -F nfs -o ro,anon=0 -d "Jumpstart Directory" /export/install' >> /etc/dfs/dfstab

# shareall

# /usr/bin/echo "# TFTPD - tftp server (primarily used for booting)" >> /etc/inetd.conf

# /usr/bin/echo "tftp\tdgram\tudp6\twait\troot\t/usr/sbin/in.tftpd\tin.tftpd -s /tftpboot" >> /etc/inetd.conf

# inetconv
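
If you want to double-check the result, something along these lines should work on Solaris 10; this is only a sketch and the exact output will vary:

# svcs -a | grep tftp
# share
# showmount -e localhost

After inetconv the tftp inetd entry should show up as an SMF service, share should list /export/install as read-only, and showmount -e shows the export list as a client would see it.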


Tuesday, March 17, 2009

Linux History

OK, after those who PM'ed me read the post regarding UNIX history, they came to ask about Linux. So for those of you who don't like using Google to get answers, here's the story:

After all that happened, as written in my previous post, here's the rest:

This lack of a widely adopted, free kernel provided the impetus for Torvalds to start his project. He has stated that if either the GNU or 386BSD kernels had been available at the time, he likely would not have written his own.

In 1991, in Helsinki, Linus Torvalds began a project that later became the Linux kernel. It was initially a terminal emulator, which Torvalds used to access the large UNIX servers of the university. He wrote the program specifically for the hardware he was using and independent of an operating system because he wanted to use the functions of his new PC with an 80386 processor. Development was done on Minix using the GNU C compiler, which is still the main choice for compiling Linux today (although the code can be built with other compilers, such as the Intel C Compiler).

As Torvalds wrote in his book Just for Fun, he eventually realized that he had written an operating system kernel. On 25 August 1991, he announced this system in a Usenet posting to the newsgroup "comp.os.minix.":


Hello everybody out there using minix -

I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I'd like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things).

I've currently ported bash(1.08) and gcc(1.40), and things seem to work. This implies that I'll get something practical within a few months, and I'd like to know what features most people would want. Any suggestions are welcome, but I won't promise I'll implement them :-)
Linus (torvalds@kruuna.helsinki.fi)

PS. Yes – it's free of any minix code, and it has a multi-threaded fs. It is NOT portable (uses 386 task switching etc), and it probably never will support anything other than AT-harddisks, as that's all I have :-(.



Linus Torvalds had wanted to call his invention Freax, a portmanteau of "freak", "free", and "x"(as an allusion to Unix). During the start of his work on the system, he stored the files under the name "Freax" for about half of a year. Torvalds had already considered the name "Linux," but initially dismissed it as too egotistical.

In order to facilitate development, the files were uploaded to the FTP server (ftp.funet.fi) of the Helsinki University of Technology (HUT) in September 1991. Ari Lemmke, Torvalds's coworker at the HUT who was responsible for the servers at the time, did not think that "Freax" was a good name. So, he named the project "Linux" on the server without consulting Torvalds. Later, however, Torvalds consented to "Linux".

There you go, and as they say, the rest is history...

Details courtesy of wikipedia.org (I told you guys it helps to search =D). So if you want to read more, head over there.


Monday, March 16, 2009

UNIX History

Well, I got some PMs from some kids interested in or asking about UNIX, so I decided to do a short article to introduce it here. So here it is:

The Unix operating system was conceived and implemented in the 1960s and first released in 1970. Its availability and portability caused it to be widely adopted, copied and modified by academic institutions and businesses. Its design became influential to authors of other systems.


In 1983, Richard Stallman started the GNU project with the goal of creating a free UNIX-like, POSIX-compatible operating system. As part of this work, he wrote the GNU General Public License (GPL). By the early 1990s there was almost enough available software to create a full operating system. However, the GNU kernel, called Hurd, failed to attract enough attention from developers leaving GNU incomplete.


Another free operating system project in the 1980s was the Berkeley Software Distribution (BSD). This was developed by UC Berkeley from the 6th edition of Unix from AT&T. Since BSD contained Unix code that AT&T owned, AT&T filed a lawsuit (USL v. BSDi) in the early 1990s against the University of California. This strongly limited the development and adoption of BSD.

MINIX, a Unix-like system intended for academic use, was released by Andrew S. Tanenbaum in 1987. While source code for the system was available, modification and redistribution were restricted. In addition, MINIX's 16-bit design was not well adapted to the 32-bit features of the increasingly cheap and popular Intel 386 architecture for personal computers.

This lack of a widely adopted, free kernel provided the impetus for Torvalds to start his project. He has stated that if either the GNU or 386BSD kernels had been available at the time, he likely would not have written his own.


Sunday, March 15, 2009

Apple to unveil next-generation iPhone OS

I was reading through the tech news at inquirer.net and saw this article regarding the iPhone; it looks like they'll be introducing a new OS with new features for their phone.

Personally I don't own one; I didn't see much reason to buy it. I'd much prefer a BlackBerry Storm to an iPhone :) (even though the BB Storm doesn't have WiFi; heck, I don't use WiFi with my phone anyway). Whether this new "next-gen iPhone OS" will be great, we'll see :)

Here goes the article...

Agence France-Presse

First Posted 09:10:00 03/13/2009

SAN FRANCISCO – Apple plans to give the world a peek next week at its next-generation operating system for iPhones.

Apple on Thursday invited news reporters to a "town hall" event at its headquarters in Cupertino, California, and promised a "sneak peek" at the upcoming iPhone 3.0 operating system.

The maker of iPhones, iPods, and Macintosh computers said the event will center on a new software developers kit for its popular multi-purpose mobile devices.

More than 500 million applications for the Apple iPhone have been downloaded from the company's online App Store since it opened in July of last year.

A study released in February, however, indicates that users quickly lose interest in those applications.

The study by Pinch Media found that fewer than five percent of iPhone users are still actively using an application a month after downloading it.

Apple iPhone users lose interest in free programs slightly faster than in paid ones, said Pinch Media, a company which offers advice to developers of applications for the hot-selling smartphone.



Saturday, March 14, 2009

Unkillable process cleanup

In my years of being a UNIX system admin I've encountered so many of these hung, unkillable processes and have always wondered why.

I found some explanation about it in the UNIX Power Tools book from O'Reilly; here goes:

You or another user might have a process that (according to ps ) has been sleeping for several days, waiting for input. If you can't kill the process, even with kill -9, there may be a bug or some other problem.


These processes can be unkillable because they've made a request for a hardware device or network resource. Unix has put them to sleep at a very high priority and the event that they are waiting on hasn't happened (because of a network problem, for example). This causes all other signals to be held until the hardware event occurs. The signal sent by kill doesn't do any good.

If the problem is with a terminal and you can get to the back of the terminal or the back of the computer, try unplugging the line from the port. Also, try typing CTRL-q on the keyboard — if the user typed CTRL-s while getting a lot of output, this may free the process.

Ask your vendor if there's a special command to reset the device driver. If there isn't, you may have to reboot the computer.
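
As a quick way to spot likely candidates on a Linux box, the following lists processes stuck in uninterruptible sleep (state D), which are exactly the ones kill -9 cannot touch. This is just a sketch and assumes the GNU procps version of ps:

ps -eo pid,stat,wchan:20,comm | awk 'NR==1 || $2 ~ /^D/'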




Friday, March 13, 2009

Manually adding persistent dynamic binds using hbanywhere

This is for when you want to dynamically add persistent bindings to specific targets so that they don't get automatically mapped onto lower available targets.

The below is the procedure, more or less.

1) make sure they are still disabled on the switch (zoning is not done yet).

2) HBAnyware needs to be installed - it typically is

3) run

/usr/sbin/hbanyware/hbacmd setpersistentbinding [hba wwpn] B P [target wwpn] [scsi id] [target id]

Where the target WWPN is the WWPN of the device you need to add; the SCSI id is typically 0.

Example

Query persistent binding on HBA 10:00:00:00:C9:48:26:10

/usr/sbin/hbanyware/hbacmd persistentbinding 10:00:00:00:C9:48:26:10 L

Set binding for target 130 immediately and update lpfc.conf

/usr/sbin/hbanyware/hbacmd setpersistentbinding 10:00:00:00:C9:4B:C2:5F B P 50:05:07:63:0f:4f:6f:26 0 130

4) Now you can activate the new target on the switch (update zoning)



Thursday, March 12, 2009

Procedure for removing SAN luns from a system

This will ensure a clean system after lun removal.
Here's what I did:


1) /etc/vx/bin/vxdiskunsetup emcpowerXX

Then have the LUNs pulled back (unmapped) on the storage side.

2) powermt display dev=all

saw that both paths for the luns were 'dead'

3) powermt remove dev=emcpowerXX

4) devfsadm -C (as Charles suggests below)

5) powercf -q (shows the reverse of adding: "path removed" messages)

5.5) powermt config

6) vxdctl enable (will show error state for luns removed)

emcpower2s2 auto - - error

7) vxdisk rm emcpowerXXs2

Now everything is cleaned up!


Wednesday, March 11, 2009

Solaris system information gathering

There are times when a UNIX system administrator is asked to get system information on the servers he handles. Here are old Solaris 7 commands to do it.


Processors
The psrinfo utility displays processor information. When run in verbose mode, it lists the speed of each processor and when the processor was last placed on-line (generally the time the system was started unless it was manually taken off-line).

/usr/sbin/psrinfo -v
Status of processor 1 as of: 12/12/02 09:25:50
Processor has been on-line since 11/17/02 21:10:09.
The sparcv9 processor operates at 400 MHz,
and has a sparcv9 floating point processor.
Status of processor 3 as of: 12/12/02 09:25:50
Processor has been on-line since 11/17/02 21:10:11.
The sparcv9 processor operates at 400 MHz,
and has a sparcv9 floating point processor.

The psradm utility can enable or disable a specific processor.

To disable a processor:
/usr/sbin/psradm -f processor_id

To enable a processor:
/usr/sbin/psradm -n processor_id

The psrinfo utility will display the processor_id when run in either standard or verbose mode.


RAM
The prtconf utility will display the system configuration, including the amount of physical memory.

To display the amount of RAM:

/usr/sbin/prtconf | grep Memory
Memory size: 3072 Megabytes



Disk space
Although there are several ways you could gather this information, the following command lists the amount of kilobytes in use versus total kilobytes available in local file systems stored on physical disks. The command does not include disk space usage from the /proc virtual file system, the floppy disk, or swap space.

df -lk | egrep -v "Filesystem|/proc|/dev/fd|swap" | awk '{ total_kbytes += $2 } { used_kbytes += $3 } END { printf "%d of %d kilobytes in use.\n", used_kbytes, total_kbytes }'
19221758 of 135949755 kilobytes in use.

You may want to convert the output to megabytes or gigabytes and display the statistics as a percentage of utilization.

The above command will list file system usage. If you are interested in listing physical disks (some of which may not be allocated to a file system), use the format command as the root user, or the iostat -En command as a non-privileged user.



Processor and kernel bits
If you are running Solaris 2.6 or earlier, you are running a 32-bit kernel.

Determine bits of processor:
isainfo -bv

Determine bits of Solaris kernel:
isainfo -kv
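
If you need to gather all of the above in one pass, a small wrapper script like this works as a rough sketch; it simply strings together the commands shown in this post:

#!/bin/sh
# Quick Solaris system summary
echo "=== Processors ==="
/usr/sbin/psrinfo -v
echo "=== Memory ==="
/usr/sbin/prtconf | grep Memory
echo "=== Local disk usage ==="
df -lk | egrep -v "Filesystem|/proc|/dev/fd|swap" | awk '{ used += $3; total += $2 } END { printf "%d of %d kilobytes in use.\n", used, total }'
echo "=== Kernel bits ==="
isainfo -kv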


Tuesday, March 10, 2009

Checking which Solaris cluster is installed

To determine which "cluster" of the Solaris Operating Environment you have installed:
$ cat /var/sadm/system/admin/CLUSTER


Explanation of value returned:

SUNWCrnet Reduced Networking Core System Support
SUNWCreq Core System Support
SUNWCuser End User System Support
SUNWCprog Developer System Support
SUNWCall Entire Distribution
SUNWCXall Entire Distribution plus OEM support

To list the packages in each of the clusters above, view the /var/sadm/system/admin/.clustertoc file. This file also contains descriptions of the clusters listed above.


Monday, March 9, 2009

Deleting ZFS slices from a disk

ZFS is great: you can have tons of filesystems in a zpool, you can create raidz/mirror/stripes with just one command, and it has compression, quotas and every other cool feature that you can think of.

But what happens if you have to move a ZFS disk to a Solaris 8/9 system?


If you want to use a ZFS'ed disk on an older system, you have to convert the label using:

format -e

and then relabel the disk and choose “0″ at the Label prompt:

# format -e c0t8d0
selecting c0t8d0
[disk formatted]

FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
scsi - independent SCSI mode selects
cache - enable, disable or query SCSI disk cache
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> l
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0
Warning: This disk has an EFI label. Changing to SMI label will erase all
current partitions.
Continue? y
Auto configuration via format.dat[no]? y
format>


Now you can just partition your drive with format or use prtvtoc and fmthard.
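
If you go the prtvtoc/fmthard route, copying an existing VTOC from an already-labelled disk onto the freshly relabelled one looks roughly like this (c0t0d0 and c0t8d0 are placeholder device names):

# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t8d0s2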


Sunday, March 8, 2009

Solaris 10 and IPMP

I've been wanting to set up IPMP on Solaris 10, and I've been receiving PMs asking how to do it too.

Here are some examples.

So here goes...


Setting IPMP with 2 physical ethernet using 3 IP Address

# more /etc/hosts
127.0.0.1 localhost
192.168.0.2 server1
192.168.0.3 server1-bge0
192.168.0.4 server1-bge1

# more /etc/netmasks
192.168.0.0 255.255.255.0

# more /etc/defaultrouter
192.168.0.1

# more /etc/hostname.bge0
server1 netmask + broadcast + group ipmp0 up \
addif server1-bge0 deprecated -failover netmask + broadcast + up

# more /etc/hostname.bge1
server1-bge1 deprecated -failover netmask + broadcast + group ipmp0 up

Now we have three IP addresses: two physical (192.168.0.3 and 192.168.0.4) and one virtual (192.168.0.2). If one of the physical interfaces fails, the 192.168.0.2 address stays up on the surviving interface.

# reboot
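Once the box is back up, a quick sanity check along these lines is worthwhile (if_mpadm is the standard Solaris tool for detaching an interface from its group; only force a failover test in a maintenance window):

ifconfig -a | grep ipmp0     # both bge interfaces should report "groupname ipmp0"
if_mpadm -d bge0             # detach bge0 to force a failover test
if_mpadm -r bge0             # reattach it when you're done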


Setting IPMP with 2 physical ethernet using 1 IP address
# more /etc/hosts
127.0.0.1 localhost
192.168.0.224 server1 loghost

# more /etc/hostname.bge0
server1 netmask + broadcast + group ipmp0 up

# more /etc/hostname.bge1
group ipmp0 up

# reboot



Saturday, March 7, 2009

Francis M Magalona Dead at 44

I am very sad to hear that a music icon, talented man and a proud Filipino passed away. My condolences to Francis M's family.



Rapper, actor and TV host Francis Magalona, diagnosed with leukemia last year, died Friday at the Medical City hospital. He was 44.

He is survived by his wife Pia Arroyo and eight children: Unna, Nicolo, Francis Jr., Isabella, Elmo, Arkin, Clara, and actress Maxene Magalona.


Linux tips every geek should know

Courtesy of: Linux Format magazine

What separates average Linux users from the super-geeks? Simple: years spent learning the kinds of hacks, tricks, tips and techniques that turn long jobs into a moment's work. If you want to get up to speed without having to put in all that leg-work, we've rounded up over 50 easy-to-learn Linux tips to help you work smarter and get the most from your computer. Enjoy!

#1: Check processes not run by you

Difficulty: Expert
Application: bash

Imagine the scene - you get yourself ready for a quick round of Crack Attack against a colleague at the office, only to find the game drags to a halt just as you're about to beat your uppity subordinate - what could be happening to make your machine so slow? It must be some of those other users, stealing your precious CPU time with their scientific experiments, webservers or other weird, geeky things!

OK, let's list all the processes on the box not being run by you!

ps aux | grep -v `whoami`

Or, to be a little more clever, why not just list the top ten time-wasters:

ps aux --sort=-%cpu | grep -m 11 -v `whoami`

It is probably best to run this as root, as this will filter out most of the vital background processes. Now that you have the information, you could just kill their processes, but much more dastardly is to run xeyes on their desktop. Repeatedly!

#2: Replacing same text in multiple files

Difficulty: Intermediate
Application: find/Perl

If you have text you want to replace in multiple locations, there are several ways to do this. To replace the text Windows with Linux in all files in the current directory called test[something], you can run this:

perl -i -pe 's/Windows/Linux/;' test*

To replace the text Windows with Linux in all text files in current directory and down you can run this:

find . -name '*.txt' -print | xargs perl -pi -e 's/Windows/Linux/ig'

Or if you prefer this will also work, but only on regular files:

find . -type f -name '*.txt' -print0 | xargs --null perl -pi -e 's/Windows/Linux/'

Saves a lot of time and has a high guru rating!
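If you'd rather avoid Perl, roughly the same job can be done with GNU sed; the -i.bak flag keeps a backup copy of each file, just in case:

find . -type f -name '*.txt' -print0 | xargs -0 sed -i.bak 's/Windows/Linux/g'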


#3: Fix a wonky terminal

Difficulty: Easy
Application: bash

We've all done it - accidentally used less or cat to list a file, and ended up viewing binary instead. This usually involves all sorts of control codes that can easily screw up your terminal display. There will be beeping. There will be funny characters. There will be odd colour combinations. At the end of it, your font will be replaced with hieroglyphics and you don't know what to do. Well, bash is obviously still working, but you just can't read what's actually going on! Send the terminal an initialisation command:

reset

and all will be well again.


#4: Creating Mozilla keywords

Difficulty: Easy
Application: Firefox/Mozilla

A useful feature in Konqueror is the ability to type gg onion to do a Google search based on the word onion. The same kind of functionality can be achieved in Mozilla by first clicking on Bookmarks>Manage Bookmarks and then Add a New Bookmark.

Add the URL as:

http://www.google.com/search?q=%s

Now select the entry in the bookmark editor and click the Properties button. Now enter the keyword as gg (or this can be anything you choose) and the process is complete. The %s in the URL will be replaced with the text after the keyword. You can apply this hack to other kinds of sites that rely on you passing information on the URL.

Alternatively, right-click on a search field and select the menu option "Add a Keyword for this Search...". The subsequent dialog will allow you to specify the keyword to use.

#5: Running multiple X sessions

Difficulty: Easy
Application: X

If you share your Linux box with someone and you are sick of continually logging in and out, you may be relieved to know that this is not really needed. Assuming that your computer starts in graphical mode (runlevel 5), by simultaneously pressing the keys Control+Alt+F1 - you will get a login prompt.

Insert your login and password and then execute:

startx -- :1

to get into your graphical environment. To go back to the previous user session, press Ctrl+Alt+F7, while to get yours back press Ctrl+Alt+F8.

You can repeat this trick: the keys F1 to F6 identify six console sessions, while F7 to F12 identify six X sessions. Caveat: although this is true in most cases, different distributions can implement this feature in a different way.

#6: Faster browsing

Difficulty: Easy
Application: KDE

In KDE, a little-known but useful option exists to speed up your web browsing experience. Start the KDE Control Center and choose System > KDE performance from the sidebar. You can now select to preload Konqueror instances. Effectively, this means that Konqueror is run on startup, but kept hidden until you try to use it. When you do, it pops up almost instantaneously. Bonus! And if you're looking for more KDE tips, make sure you check out 20 all-new KDE 4.2 tips at tuxradar.

#7: Backup your website easily

Difficulty: Easy
Application: Backups

If you want to back up a directory on a computer and only copy changed files to the backup computer instead of everything with each backup, you can use the rsync tool to do this. You will need an account on the remote computer that you are backing up from.

Here is the command:

rsync -vare ssh jono@192.168.0.2:/home/jono/importantfiles/* /home/jono/backup/

Here we are backing up all of the files in /home/jono/importantfiles/ on 192.168.0.2 to /home/jono/backup on the current machine.

#8: Keeping your clock in time

Difficulty: Easy
Application: NTP

If you find that the clock on your computer seems to wander off the time, you can make use of a special NTP tool to ensure that you are always synchronised with the kind of accuracy that only people that wear white coats get excited about. You will need to install the ntpdate tool that is often included in the NTP package, and then you can synchronise with an NTP server:

ntpdate ntp.blueyonder.co.uk

A list of suitable NTP servers is available at www.eecis.udel.edu/~mills/ntp/clock1b.html. If you modify your boot process and scripts to include this command you can ensure that you are perfectly in time whenever you boot your computer. You could also run a cron job to update the time.
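For example, a crontab entry along these lines (added with crontab -e; the path to ntpdate may differ on your distro) would resync the clock every night at 2am:

0 2 * * * /usr/sbin/ntpdate -s ntp.blueyonder.co.uk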

#9: Finding the biggest files

· Difficulty: Easy
· Application: Shell

A common problem with computers is when you have a number of large files (such as audio/video clips) that you may want to get rid of. You can find the biggest files in the current directory with:

ls -lSrh

The "r" causes the large files to be listed at the end and the "h" gives human readable output (MB and such). You could also search for the biggest MP3/MPEGs:

ls -lSrh *.mp*

You can also look for the largest directories with:

du -kx | egrep -v "\./.+/" | sort -n

#10: Nautilus shortcuts

· Difficulty: Easy
· Application: Nautilus

Although most file managers these days are designed to be used with the mouse, it's also useful to be able to use the keyboard sometimes. Nautilus has a few keyboard shortcuts that can have you flying through files:

· Open a location - Ctrl+L
· Open Parent folder - Ctrl+Up
· Arrow keys navigate around current folder.

You can also customise the file icons with 'emblems'. These are little graphical overlays that can be applied to individual files or groups. Open the Edit > Backgrounds and Emblems menu item, and drag-and-drop the images you want.

#11: Defrag your databases

· Difficulty: Easy
· Application: MySQL

Whenever you change the structure of a MySQL database, or remove a lot of data from it, the files can become fragmented resulting in a loss of performance, particularly when running queries. Just remember any time you change the database to run the optimiser:

mysqlcheck -o <databasename>

You may also find it worth your while to defragment your database tables regularly if you are using VARCHAR fields: these variable-length columns are particularly prone to fragmentation.
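To optimise every table in every database in one go (you will be prompted for the MySQL root password), something like this should do it:

mysqlcheck -o --all-databases -u root -p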

#12: Quicker emails

· Difficulty: Easy
· Application: KMail

Can't afford to waste three seconds locating your email client? Can't be bothered finding the mouse under all those gently rotting mountains of clutter on your desk? Whatever you are doing in KDE, you are only a few keypresses away from sending a mail. Press Alt+F2 to bring up the 'Run command' dialog. Type:

mailto:plop@ploppypants.com

Press return and KMail will automatically fire up, ready for your words of wisdom. You don't even need to fill in the entire email address. This also works for Internet addresses: try typing www.slashdot.org to launch Konqueror.

#13: Parallelise your build

· Difficulty: Easy
· Application: GCC

If you're running a multiprocessor system (SMP) with a moderate amount of RAM, you can usually see significant benefits by performing a parallel make when building code. Compared to doing serial builds when running make (as is the default), a parallel build is a vast improvement. To tell make to allow more than one child at a time while building, use the -j switch:

make -j4; make -j4 modules
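A common rule of thumb is one job per CPU core, plus one; on most Linux systems getconf can report the core count for you, so for example:

make -j$(($(getconf _NPROCESSORS_ONLN) + 1))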

#14: Save battery power

· Difficulty: Intermediate
· Application: hdparm

You are probably familiar with using hdparm for tuning a hard drive, but it can also save battery life on your laptop, or make life quieter for you by spinning down drives.

hdparm -y /dev/hdb
hdparm -Y /dev/hdb
hdparm -S 36 /dev/hdb

In order, these commands will: cause the drive to switch to Standby mode, switch to Sleep mode, and finally set the Automatic spindown timeout. This last includes a numeric variable, whose units are blocks of 5 seconds (for example, a value of 12 would equal one minute).
Incidentally, this habit of specifying spindown time in blocks of 5 seconds should really be a contender for a special user-friendliness award - there's probably some historical reason for it, but we're stumped. Write in and tell us if you happen to know where it came from!
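So, for example, to have the drive spin down after ten minutes of inactivity (120 blocks of 5 seconds):

hdparm -S 120 /dev/hdb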

#15: Wireless speed management

· Difficulty: Intermediate
· Application: iwconfig

The speed at which a piece of radio transmission/receiver equipment can communicate with another depends on how much signal is available. In order to maintain communications as the available signal fades, the radios need to transmit data at a slower rate. Normally, the radios attempt to work out the available signal on their own and automatically select the fastest possible speed.

In fringe areas with a barely adequate signal, packets may be needlessly lost while the radios continually renegotiate the link speed. If you can't add more antenna gain, or reposition your equipment to get a good enough signal, consider forcing your card to sync at a lower rate.

This will mean fewer retries, and can be substantially faster than using a continually flip-flopping link. Each driver has its own method for setting the link speed. In Linux, set the link speed with iwconfig:

iwconfig eth0 rate 2M

This forces the radio to always sync at 2Mbps, even if other speeds are available. You can also set a particular speed as a ceiling, and allow the card to automatically scale to any slower speed, but go no faster. For example, you might use this on the example link above:
iwconfig eth0 rate 5.5M auto

Using the auto directive this way tells the driver to allow speeds up to 5.5Mbps, and to run slower if necessary, but will never try to sync at anything faster. To restore the card to full auto scaling, just specify auto by itself:

iwconfig eth0 rate auto

Cards can generally reach much further at 1Mbps than they can at 11Mbps. There is a difference of 12dB between the 1Mbps and 11Mbps ratings of the Orinoco card - that's four times the potential distance just by dropping the data rate!

#16: Unclog open ports

· Difficulty: Intermediate
· Application: netstat

Generating a list of network ports that are in the Listen state on a Linux server is simple with netstat:

root@catlin:~# netstat -lnp

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:5280 0.0.0.0:* LISTEN 698/perl
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 217/httpd
tcp 0 0 10.42.3.2:53 0.0.0.0:* LISTEN 220/named
tcp 0 0 10.42.4.6:53 0.0.0.0:* LISTEN 220/named
tcp 0 0 127.0.0.1:53 0.0.0.0:* LISTEN 220/named
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 200/sshd
udp 0 0 0.0.0.0:32768 0.0.0.0:* 220/named
udp 0 0 10.42.3.2:53 0.0.0.0:* 220/named
udp 0 0 10.42.4.6:53 0.0.0.0:* 220/named
udp 0 0 127.0.0.1:53 0.0.0.0:* 220/named
udp 0 0 0.0.0.0:67 0.0.0.0:* 222/dhcpd
raw 0 0 0.0.0.0:1 0.0.0.0:* 7 222/dhcpd

That shows you that PID 698 is a Perl process that is bound to port 5280. If you're not root, the system won't disclose which programs are running on which ports.
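If you only care about listening TCP sockets, a slightly tighter invocation (again run as root to see the program names) is:

netstat -tlnp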

#17: Faster Hard drives

· Difficulty: Expert
· Application: hdparm

You may know that the hdparm tool can be used to speed test your disk and change a few settings. It can also be used to optimise drive performance, and turn on some features that may not be enabled by default. Before we start though, be warned that changing drive options can cause data corruption, so back up all your important data first. Testing speed is done with:

hdparm -Tt /dev/hda

You'll see something like:

/dev/hda:
Timing buffer-cache reads: 128 MB in 1.64 seconds = 78.05 MB/sec
Timing buffered disk reads: 64 MB in 18.56 seconds = 3.45 MB/sec

Now we can try speeding it up. To find out which options your drive is currently set to use, just pass hdparm the device name:

hdparm /dev/hda

/dev/hda:
multcount = 16 (on)
I/O support = 0 (default 16-bit)
unmaskirq = 0 (off)
using_dma = 0 (off)
keepsettings = 0 (off)
readonly = 0 (off)
readahead = 8 (on)
geometry = 40395/16/63, sectors = 40718160, start = 0

This is a fairly default setting. Most distros will opt for safe options that will work with most hardware. To get more speed, you may want to enable dma mode, and certainly adjust I/O support. Most modern computers support mode 3, which is a 32-bit transfer mode that can nearly double throughput. You might want to try

hdparm -c3 -d1 /dev/hda

Then rerun the speed check to see the difference. Check out the modes your hardware will support, and the hdparm man pages for how to set them.
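To see what the drive itself claims to support before committing to a setting, you can query its identification data:

hdparm -i /dev/hda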

#18: Uptime on your hands

· Difficulty: Expert
· Application: Perl

In computing, wasted resources are resources that could be better spent helping you. Why not run a process that updates the titlebar of your terminal with the current load average in real-time, regardless of what else you're running?

Save this as a script called tl, and save it to your ~/bin directory:

#!/usr/bin/perl -w

use strict;
$|++;

my $host=`/bin/hostname`;
chomp $host;

while(1) {

open(LOAD,"/proc/loadavg") || die "Couldn't open /proc/loadavg: $!\n";

my @load=split(/ /,<LOAD>);
close(LOAD);

# xterm titlebar escape: ESC ] 0 ; <text> BEL
print "\033]0;";
print "$host: $load[0] $load[1] $load[2] at ", scalar(localtime);
print "\007";

sleep 2;
}

When you'd like to have your titlebar replaced with the name, load average, and current time of the machine you're logged into, just run tl&. It will happily go on running in the background, even if you're running an interactive program like Vim.

#19: Grabbing a screenshot without X

· Difficulty: Easy
· Application: Shell

There are plenty of screen-capture tools, but a lot of them are based on X. This leads to a
problem when running an X application would interfere with the application you wanted to grab - perhaps a game or even a Linux installer. If you use the venerable ImageMagick import command though, you can grab from an X session via the console. Simply go to a virtual terminal (Ctrl+Alt+F1 for example) and enter the following:

chvt 7; sleep 2; import -display :0.0 -window root sshot1.png; chvt 1;

The chvt command changes the virtual terminal, and the sleep command gives it a while to redraw the screen. The import command then captures the whole display and saves it to a file before the final chvt command sticks you back in the virtual terminal again. Make sure you type the whole command on one line.

This can even work on Linux installers, many of which leave a console running in the background - just load up a floppy/CD with import and the few libraries it requires for a first-rate run-anywhere screen grabber.

#20: Access your programs remotely

· Difficulty: Easy
· Application: X

If you would like to lie in bed with your Linux laptop and access your applications from your Windows machine, you can do this with SSH. You first need to enable the following setting in /etc/ssh/sshd_config:

X11Forwarding yes

We can now run The GIMP on 192.168.0.2 with:

ssh -X 192.168.0.2 gimp
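If the application refuses to draw anything, newer OpenSSH releases distinguish between untrusted (-X) and trusted (-Y) X11 forwarding, so this variant is worth a try:

ssh -Y 192.168.0.2 gimp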

#21: Making man pages useful

· Difficulty: Easy
· Application: man

If you are looking for some help on a particular subject or command, man pages are a good place to start. You normally access a man page with 'man command', but you can also search the man page descriptions for a particular keyword. As an example, search for man pages that discuss logins:

man -k login

When you access a man page, you can also use the forward slash key to search for a particular word within the man page itself. Simply press / on your keyboard and then type in the search term.
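Two related tricks: whatis gives you the one-line summary for every manual section that documents a name, and you can ask for a specific section directly. For example:

whatis passwd     # summaries for passwd in every manual section
man 5 passwd      # read the file-format page rather than the command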

#22: Talk to your doctor!

· Difficulty: Easy
· Application: Emacs

To say that Emacs is just a text editor is like saying that a Triumph is just a motorcycle, or the World Cup is just some four-yearly football event. True, but simplified juuuust a little bit. An example? Open the editor, press the Esc key followed by X and then type doctor: you will be engaged in a surreal conversation by an imaginary and underskilled psychotherapist. And if you want to waste your time in a better way,

Esc-X tetris

will transform your 'editor' into the old favourite arcade game.

Does the madness stop there? No! Check out your distro's package list to see what else they've bundled for Emacs: we've got chess, Perl integration, IRC chat, French translation, HTML conversion, a Java development environment, smart compilation, and even something called a "semantic bovinator". We really haven't the first clue what that last one does, but we dare you to try it out anyway! (Please read the disclaimer first!)

#23: Generating package relationship diagrams

· Difficulty: Easy
· Application: Debian

The most critical part of the Debian system is the ability to install a package and have the dependencies satisfied automatically. If you would like a graphical representation of the relationships between these packages (this can be useful for seeing how the system fits together), you can use the Graphviz package from Debian non-free (apt-get install graphviz) and the following command:

apt-cache dotty > debian.dot

This command generates the graph file, which can then be loaded into dotty:

dotty debian.dot

#24: Unmount busy drives

· Difficulty: Easy
· Application: bash

You are probably all too familiar with the situation - you are trying to unmount a drive, but keep getting told by your system that it's busy. But what application is tying it up? A quick one-liner will tell you:

lsof +D /mnt/windows

This will return the command and process ID of any tasks currently accessing the /mnt/windows directory. You can then locate them, or use the kill command to finish them off.
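Once you've checked the list and you're sure nothing important is running, lsof's terse mode (-t, PIDs only) makes it easy to finish the offenders off in one line:

kill $(lsof -t +D /mnt/windows)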

#25: Text file conversion

· Difficulty: Easy
· Application: recode

recode is a small utility that will save you loads of effort when using text files created on different platforms. The primary source of discontent is line breaks. In some systems, these are denoted with a line-feed character. In others, a carriage return is used. In still more systems, both are used. The end result is that if you are swapping text from one platform to another, you end up with too many or too few line breaks, and lots of strange characters besides.

However, the command parameters of recode are a little arcane, so why not combine this hack with HACK 27 in this feature, and set up some useful aliases:

alias dos2unix='recode dos/CR-LF..l1'
alias unix2win='recode l1..windows-1250'
alias unix2dos='recode l1..dos/CR-LF'

There are plenty more options for recode - it can actually convert between a whole range of character sets. Check out the man pages for more information.

#26: Listing today's files only

· Difficulty: Easy
· Application: Various

You are probably familiar with the problem. Sometime earlier in the day, you created a text file, which now is urgently required. However, you can't remember what ridiculous name you gave it, and being a typical geek, your home folder is full of 836 different files. How can you find it? Well, there are various ways, but this little tip shows you the power of pipes and joining together two powerful shell commands:

ls -al --time-style=+%D | grep `date +%D`

The parameters to the ls command here cause the datestamp to be output in a particular format. The cunning bit is that the output is then passed to grep. The grep parameter is itself a command (executed because of the backticks), which substitutes the current date into the string to be matched. You could easily modify it to search specifically for other dates, times, filesizes or whatever. Combine it with HACK 27 to save typing!

#27: Avoid common mistypes and long commands

· Difficulty: Easy
· Application: Shell

The alias command is useful for setting up shortcuts for long commands, or even more clever things. From HACK 26, we could make a new command, lsnew, by doing this:

alias lsnew='ls -al --time-style=+%D | grep `date +%D`'

But there are other uses of alias. For example, common mistyping mistakes. How many times have you accidentally left out the space when changing to the parent directory? Worry no more!

alias cd..="cd .."

Alternatively, how about rewriting some existing commands?

alias ls="ls -al"

saves a few keypresses if, like us, you always want the complete list.

To have these shortcuts enabled for every session, just add the alias commands to your user .bashrc file in your home directory.

#28: Alter Mozilla's secret settings

· Difficulty: Easy
· Application: Mozilla

If you find that you would like to change how Mozilla works but the preferences offer nothing by way of clickable options that can help you, there is a special mode that you can enable in Mozilla so that you can change anything. To access it, type this into the address bar:

about:config

You can then change each setting that you are interested in by changing the Value field in the table.

Other interesting modes include general information (about:), details about plugins (about:plugins), credits information (about:credits) and some general wisdom (about:mozilla).

#29: A backdrop of stars

· Difficulty: Easy
· Application: KStars

You may already have played with KStars, but how about creating a KStars backdrop image that's updated every time you start up?

KStars can be run with the --dump switch, which dumps out an image from your startup settings, but doesn't load the GUI at all. You can create a script to run this and generate a desktop image, which will change every day (or you can just use this method to generate images).

Run KStars like this:

kstars --dump --width 1024 --height 768 --filename ~/kstarsback.png

You can add this to a script in your ~/.kde/Autostart folder to be run at startup. Find the file in Konqueror, drag it to the desktop and select 'Set as wallpaper' to use it as a randomly generated backdrop.
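The autostart script itself only needs a couple of lines; something along these lines (saved as, say, ~/.kde/Autostart/kstarswall.sh and made executable - the name is just an example) would regenerate the backdrop at each login:

#!/bin/sh
# regenerate the KStars backdrop image at login
kstars --dump --width 1024 --height 768 --filename ~/kstarsback.png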

#30: Open an SVG directly

· Difficulty: Easy
· Application: Inkscape

You can run Inkscape from a shell and immediately edit a graphic directly from a URL. Just type:

inkscape http://www.somehost.com/graphic.svg

Remember to save it as something else though!

#31: Editing without an editor

· Difficulty: Intermediate
· Application: Various

Very long files are often hard to manipulate with a text editor. If you need to do it regularly, chances are you'll find it much faster to use some handy command-line tools instead, like in the following examples.

To print certain columns (for example, columns 1 and 3) from a file file1 into file2, we can use awk:

awk '{print $1, $3}' file1 > file2

To output only characters from column 8 to column 15 of file1, we can use cut:

cut -c 8-15 file1 > file2

To replace the word word1 with the word word2 in the file file1, we can use the sed command:

sed "s/word1/word2/g" file1 > file2

This is often a quicker way to get results than even opening a text editor.

#32: Backup selected files only

· Difficulty: Intermediate
· Application: tar

Want to use tar to back up only certain files in a directory? Then you'll want to use the -T flag as follows. First, create a file listing the files you want to back up:

cat >> /etc/backup.conf << EOF
/etc/passwd
/etc/shadow
/etc/yp.conf
/etc/sysctl.conf
EOF

Then run tar with the -T flag pointing to the file just created:

tar -cjf bck-etc-`date +%Y-%m-%d`.tar.bz2 -T /etc/backup.conf

Now you have your backup.

#33: Merging columns in files

· Difficulty: Intermediate
· Application: bash

While splitting columns in files is easy enough, merging them can be complicated. Below is a simple shell script that does the job:

#!/bin/sh
length=`wc -l $1 | awk '{print $1}'`
count=1
[ -f $3 ] && echo "Optionally removing $3" && rm -i $3
while [ "$count" -le "$length" ] ; do
a=`head -$count $1 | tail -1`
b=`head -$count $2 | tail -1`
echo "$a $b" >> $3
count=`expr $count + 1`
done

Give this script the name merge.sh and make it executable with:

chmod u+x merge.sh

Now, if you want to merge the columns of file1 and file2 into file3, it's just a matter of executing:

/path/to/merge.sh file1 file2 file3

where /path/to has to be replaced with the location of merge.sh in your filesystem.
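Incidentally, if your columns line up one-to-one and you don't need the script, the paste utility does the same job in a single command:

paste -d' ' file1 file2 > file3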

#34: Case sensitivity

· Difficulty: Intermediate
· Application: bash

While the case of a filename makes no difference to some other operating systems, in Linux "Command" and "command" are different things. This can cause trouble when moving files from Windows to Linux. tr is a little shell utility that can be used to change the case of a bunch of files.

#!/bin/sh
for i in `ls -1`; do
file1=`echo $i | tr '[A-Z]' '[a-z]'`
mv $i $file1 2>/dev/null
done

By executing it, FILE1 and fiLe2 will be renamed file1 and file2 respectively.

#35: Macros in Emacs

· Difficulty: Intermediate
· Application: Emacs

When editing files, you will often find that the tasks are tedious and repetitive, so to spare your time you should record a macro. In Emacs, you will have to go through the following steps:

1. Press Ctrl+X ( to start recording.
2. Insert all the keystrokes and commands that you want.
3. Press Ctrl+X ) to stop when you're done.

Now, you can execute that with

Ctrl+U <number> Ctrl+X e

where <number> is the number of times you want to execute the macro. If you enter a value of 0, the macro will be executed until the end of the file is reached. Ctrl+X e on its own is equivalent to Ctrl+U 1 Ctrl+X e.

#36: Simple spam killing

· Difficulty: Intermediate
· Application: KMail

Spam, or unsolicited bulk email, is such a widespread problem that almost everyone has some sort of spam protection now, out of necessity. Most ISPs include spam filtering, but it isn't set to be too aggressive, and most often simply labels the spam, but lets it through (ISPs don't want to be blamed for losing your mails).

The result is that, while you may have anti-spam stuff set up on the client-side, you can make its job easier by writing a few filters to remove the spam that's already labelled as such. The label is included as a header. In KMail, you can just create a quick filter to bin your mail, or direct it to a junk folder. The exact header used will depend on the software your ISP is using, but it's usually something like X-Spam-Flag = YES for systems like SpamAssassin.

Simply create a filter in KMail, choose Match Any of the Following and type in the header details and the action you require. Apply the filter to incoming mail, and you need never be troubled by about half the volume of your spam ever again.

#37: Read OOo docs without OOo

· Difficulty: Intermediate
· Application: OpenOffice.org

Have you ever been left with an OOo document, but no OpenOffice.org in which to read it? Thought you saved it out as plain text (.txt), but used the StarOffice .sxw format instead? The text can be rescued. Firstly, the sxw file is a zip archive, so unzip it:
unzip myfile.sxw

The file you want is called 'content.xml'. Unfortunately, it's so full of xml tags it's fairly illegible, so filter them out with some Perl magic:

cat content.xml | perl -p -e "s/<[^>]*>/ /g;s/\n/ /g;s/ +/ /;"

It may have lost lots of formatting, but at least it is now readable.

#38: Find and execute

· Difficulty: Intermediate
· Application: find

The find command is not only useful for finding files, but is also useful for processing the ones it finds too. Here is a quick example.

Suppose we have a lot of tarballs, and we want to find them all:

find . -name '*.gz'

will locate all the gzip archives in the current path. But suppose we want to check they are valid archives? The gunzip -vt option will do this for us, but we can cunningly combine both operations, using xargs:

find . -name '*.gz' | xargs gunzip -vt
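The same check can be done with find's own -exec action if you prefer to skip xargs (a little slower on huge trees, but one less command to remember):

find . -name '*.gz' -exec gunzip -vt {} \;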

#39: Use the correct whois server

· Difficulty: Intermediate
· Application: whois

The whois command is very useful for tracking down Internet miscreants and the ISPs that are supplying them with service. Unfortunately, there are many whois servers, and if you are querying against a domain name, you often have to use one which is specific to the TLD they are using. However, there are some whois proxies that will automatically forward your query on to the correct server. One of these is available at http://whois.geektools.com/.

whois -h whois.geektools.com plop.info

#40: Where did that drive mount?

· Difficulty: Intermediate
· Application: bash

A common problem for people who have lots of mountable devices (USB drives, flash memory cards, USB key drives) is working out where the drive you just plugged in has ended up.

Practically all devices that invoke a driver - such as usb-storage - will dump some useful information in the logs. Try

dmesg | grep SCSI

This will filter out recognised drive specs from the dmesg output. You'll probably turn up some text like:

SCSI device sda: 125952 512-byte hdwr sectors (64 MB)

So your device is at sda.

#41: Autorun USB devices

· Difficulty: Expert
· Application: hotplug scripts

Want to run a specific application whenever a particular device is added? The USB hotplug daemon can help you! This service is notified when USB devices are added to the system. For devices that require kernel drivers, the hotplug daemon will call a script by the same name in /etc/hotplug/usb/, for example, a script called usb-storage exists there. You can simply add your own commands to the end of this script (or better still, tag a line at the end of it to execute a script elsewhere). Then you can play a sound, autosync files, search for pictures or whatever.
For devices that don't rely on kernel drivers, a lookup table is used matching the USB product and manufacturer ID. Many distros already set this up to do something, but you can customise these scripts pretty easily. See http://jphoto.sourceforge.net/?selected=sync for an example of what can be done.
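For instance, appending a line like this to the end of /etc/hotplug/usb/usb-storage (the script path described above) would simply log each attach event; swap in whatever command you actually want to run, as the message here is only an example:

logger "usb-storage device attached"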

#42: Rename and resize images

· Difficulty: Expert
· Application: bash

Fond of your new camera but can't put up with the terrible names? Do you also want to prepare the pictures for publishing on the web? No problem, a simple bash script is what you need:

#!/bin/sh
counter=1
root=mypict
resolution=400x300
for i in `ls -1 $1/*.jpg`; do
echo "Now working on $i"
convert -resize $resolution $i ${root}_${counter}.jpg
counter=`expr $counter + 1`
done

Save the script in a file called picturename.sh and make it executable with
chmod u+x picturename.sh

and store it somewhere in your path. Now, if you have a bunch of .jpg files in the directory /path/to/pictdir, all you have to do is to execute picturename.sh /path/to/pictdir
and in the current directory you'll find mypict_1.jpg, mypict_2.jpg etc, which are the resized versions of your original ones. You can change the script according to your needs, or, if you're just looking for super-simple image resizing, try looking at the mogrify command with its -geometry parameter.
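For the super-simple case mentioned above, mogrify resizes images in place, so work on copies (it overwrites the originals):

mogrify -geometry 400x300 *.jpg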

#43: Secure logout

· Difficulty: Easy
· Application: bash

When you are using a console on a shared machine, or indeed, just on your own desktop, you may find that when you logout, the screen still shows a trace of who was logged in and what you were doing. A lot of distros will clear the screen, but some don't. You can solve this by editing your ~/.bash_logout file and adding the command:

clear

You can add any other useful commands here too.

#44: Transferring files without ftp or scp

· Difficulty: Easy
· Application: netcat

Need to transfer a directory to another server but do not have FTP or SCP access? Well this little trick will help out using the netcat utility. On the destination server run:

nc -l -p 1234 | uncompress -c | tar xvfp -

And on the sending server run:

tar cfp - /some/dir | compress -c | nc -w 3 [destination] 1234

Now you can transfer directories without FTP and without needing root access.
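If compress/uncompress aren't installed, a gzip-based variant of the same trick works just as well. On the destination:

nc -l -p 1234 | gunzip -c | tar xvfp -

and on the sender:

tar cfp - /some/dir | gzip -c | nc -w 3 [destination] 1234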

#45: Backing up a Debian package list

· Difficulty: Easy
· Application: Debian

If you are running Debian and have lost track of which packages you are running, it could be useful to get a backup of your currently installed packages. You can get a list by running:

dpkg --get-selections > debianlist.txt

This will put the entire list in debianlist.txt. You could then install the same packages on a different computer with:

dpkg --set-selections < debianlist.txt

#50: Quick logins with SSH client keys

· Difficulty: Intermediate
· Application: ssh

First, create yourself a DSA key pair:

ssh-keygen -t dsa -C your.email@ddress

Enter a passphrase for your key. This puts the secret key in ~/.ssh/id_dsa and the public key in ~/.ssh/id_dsa.pub. Now see whether you have an ssh-agent running at present:

echo $SSH_AGENT_PID

Most window managers will run it automatically if it's installed. If not, start one up:

eval $(ssh-agent)

Now, tell the agent about your key:

ssh-add

and enter your passphrase. You'll need to do this each time you log in; if you're using X, try
adding

SSH_ASKPASS=ssh-askpass ssh-add

to your .xsession file. (You may need to install ssh-askpass.) Now for each server you log into, create the directory ~/.ssh and copy the file ~/.ssh/id_dsa.pub into it as ~/.ssh/authorized_keys . If you started the ssh-agent by hand, kill it with

ssh-agent -k

when you log out.
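If your OpenSSH installation ships the ssh-copy-id helper, it automates the authorized_keys step for each server:

ssh-copy-id -i ~/.ssh/id_dsa.pub user@server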

#51: Using rsync over ssh

· Difficulty: Intermediate
· Application: Shell

Keep large directory structures in sync quickly with rsync. While tar over SSH is ideal for making remote copies of parts of a filesystem, rsync is even better suited for keeping the filesystem in sync between two machines. To run an rsync over SSH, pass it the -e switch, like this:

rsync -ave ssh greendome:/home/ftp/pub/ /home/ftp/pub/

Note the trailing / on the file spec from the source side (on greendome.) On the source spec, a trailing / tells rsync to copy the contents of the directory, but not the directory itself. To include the directory as the top level of what's being copied, leave off the /:
rsync -ave ssh bcnu:/home/six .

This will keep a local copy of the ~/six/ directory in sync with whatever is present on bcnu:/home/six/. By default, rsync will only copy files and directories, but not remove them from the destination copy when they are removed from the source. To keep the copies exact, include the --delete flag:

rsync -ave ssh --delete greendome:~one/reports .

Now when old reports are removed from ~one/reports/ on greendome, they're also removed from ~six/public_html/reports/ on the synced version, every time this command is run. If you run a command like this in cron, leave off the v switch. This will keep the output quiet (unless rsync has a problem running, in which case you'll receive an email with the error output). Using SSH as your transport for rsync traffic has the advantage of encrypting the data over the network and also takes advantage of any trust relationships you already have established using SSH client keys.

#52: Asset scanning

· Difficulty: Intermediate
· Application: nmap

Normally, when people think of using nmap, they assume it's used to conduct some sort of nefarious network reconnaissance in preparation for an attack. But as with all powerful tools, nmap can be made to wear a white hat, as it's useful for far more than breaking into networks. For example, simple TCP connect scans can be conducted without needing root privileges:

nmap rigel

nmap can also scan ranges of IP addresses by specifying the range or using CIDR notation:

nmap 192.168.0.1-254
nmap 192.168.0.0/24

nmap can provide much more information if it is run as root. When run as root, it can use special packets to determine the operating system of the remote machine by using the -O flag. Additionally, you can do half-open TCP scanning by using the -sS flag. When doing a half-open scan, nmap will send a SYN packet to the remote host and wait to receive the ACK from it; if it receives an ACK, it knows that the port is open.

This is different from a normal three-way TCP handshake, where the client will send a SYN packet and then send an ACK back to the server once it has received the initial server ACK. Attackers typically use this option to avoid having their scans logged on the remote machine.

nmap -sS -O rigel

Starting nmap V. 3.00 ( www.insecure.org/nmap/ )
Interesting ports on rigel.nnc (192.168.0.61):
(The 1578 ports scanned but not shown below are in state: filtered)
Port State Service
7/tcp open echo
9/tcp open discard
13/tcp open daytime
19/tcp open chargen
21/tcp open ftp
22/tcp open ssh
23/tcp open telnet
25/tcp open smtp
37/tcp open time
79/tcp open finger
111/tcp open sunrpc
512/tcp open exec
513/tcp open login
514/tcp open shell
587/tcp open submission
7100/tcp open font-service
32771/tcp open sometimes-rpc5
32772/tcp open sometimes-rpc7
32773/tcp open sometimes-rpc9
32774/tcp open sometimes-rpc11
32777/tcp open sometimes-rpc17
Remote operating system guess: Solaris 9 Beta through Release on SPARC
Uptime 44.051 days (since Sat Nov 1 16:41:50 2003)
Nmap run completed -- 1 IP address (1 host up) scanned in 166 seconds

With OS detection enabled, nmap has confirmed that the OS is Solaris, but now you also know that it's probably Version 9 running on a SPARC processor.

One powerful feature that can be used to help keep track of your network is nmap's XML output capabilities. This is activated by using the -oX command-line switch, like this:

nmap -sS -O -oX scandata.xml rigel

This is especially useful when scanning a range of IP addresses or your whole network, because you can put all the information gathered from the scan into a single XML file that can be parsed and inserted into a database. An entry for an open port looks something like this (the exact attributes vary between nmap versions):

<port protocol="tcp" portid="22">
<state state="open" />
<service name="ssh" method="table" conf="3" />
</port>

nmap is a powerful tool. By using its XML output capabilities, a little bit of scripting, and a database, you can create an even more powerful tool that can monitor your network for unauthorized services and machines.
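As a very rough first pass at that scripting, you can pull the open port numbers straight out of the XML with GNU grep before reaching for a proper parser:

grep -o 'portid="[0-9]*"' scandata.xml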

#53: Backup your bootsector

· Difficulty: Expert
· Application: Shell

Messing with bootloaders, dual-booting and various other scary processes can leave you with a messed up bootsector. Why not create a backup of it while you can:

dd if=/dev/hda of=bootsector.img bs=512 count=1

Obviously you should change the device to reflect your boot drive (it may be sda for SCSI). Also, be very careful not to get things the wrong way around - you can easily damage your drive! To restore use:

dd if=bootsector.img of=/dev/hda
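If you only want to restore the boot code and leave the partition table untouched, restore just the first 446 bytes of the sector (the remaining bytes hold the partition table and signature):

dd if=bootsector.img of=/dev/hda bs=446 count=1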


#54: Protect log files

· Difficulty: Expert
· Application: Various

During an intrusion, an attacker will more than likely leave telltale signs of his actions in various system logs: a valuable audit trail that should be protected. Without reliable logs, it can be very difficult to figure out how the attacker got in, or where the attack came from. This info is crucial in analysing the incident and then responding to it by contacting the appropriate parties involved. But, if the break-in is successful, what's to stop him from removing the traces of his misbehaviour?

This is where file attributes come in to save the day (or at least make it a little better). Both Linux and the BSDs have the ability to assign extra attributes to files and directories. This is different from the standard Unix permissions scheme in that the attributes set on a file apply universally to all users of the system, and they affect file accesses at a much deeper level than file permissions or ACLs.

In Linux, you can see and modify the attributes that are set for a given file by using the lsattr and chattr commands, respectively. At the time of this writing, file attributes in Linux are available only when using the ext2 and ext3 filesystems. There are also kernel patches available for attribute support in XFS and ReiserFS. One useful attribute for protecting log files is append-only. When this attribute is set, the file cannot be deleted, and writes are only allowed to append to the end of the file.

To set the append-only flag under Linux, run this command:

chattr +a filename

See how the +a attribute works: create a file and set its append-only attribute:

touch /var/log/logfile
echo "append-only not set" > /var/log/logfile
chattr +a /var/log/logfile
echo "append-only set" > /var/log/logfile
bash: /var/log/logfile: Operation not permitted

The second write attempt failed, since it would overwrite the file. However, appending to the end of the file is still permitted:

echo "appending to file" >> /var/log/logfile
cat /var/log/logfile
append-only not set
appending to file

Obviously, an intruder who has gained root privileges could realise that file attributes are being used and just remove the append-only flag from our logs by running chattr -a. To prevent this, we need to disable the ability to remove the append-only attribute. To accomplish this under Linux, use its capabilities mechanism.

The Linux capabilities model divides up the privileges given to the all-powerful root account and allows you to selectively disable them. In order to prevent a user from removing the append-only attribute from a file, we need to remove the CAP_LINUX_IMMUTABLE capability. When present in the running system, this capability allows the append-only attribute to be modified.

To modify the set of capabilities available to the system, we will use a simple utility called lcap (http://packetstormsecurity.org/linux/admin/lcap-0.0.3.tar.bz2).

To unpack and compile the tool, run this command:

tar xvfj lcap-0.0.3.tar.bz2 && cd lcap-0.0.3 && make

Then, to disallow modification of the append-only flag, run:

./lcap CAP_LINUX_IMMUTABLE
./lcap CAP_SYS_RAWIO

The first command removes the ability to change the append-only flag, and the second removes the ability to do raw I/O. This is needed so that the protected files cannot be modified by accessing the block device they reside on. It also prevents access to /dev/mem and /dev/kmem, which would provide a loophole for an intruder to reinstate the CAP_LINUX_IMMUTABLE capability. To remove these capabilities at boot, add the previous two commands to your system startup scripts (eg /etc/rc.local). You should ensure that capabilities are removed late in the boot order, to prevent problems with other startup scripts. Once lcap has removed kernel capabilities, they can be reinstated only by rebooting the system.

Before doing this, you should be aware that adding append-only flags to your log files will most likely cause log rotation scripts to fail. However, doing this will greatly enhance the security of your audit trail, which will prove invaluable in the event of an incident.

#55: Automatically encrypted connections

· Difficulty: Expert
· Application: FreeS/WAN

One particularly cool feature supported by FreeS/WAN is opportunistic encryption with other hosts running FreeS/WAN. This allows FreeS/WAN to transparently encrypt traffic between all hosts that also support opportunistic encryption. To do this, each host must have a public key generated to use with FreeS/WAN. This key can then be stored in a DNS TXT record for that host. When a host that is set up for opportunistic encryption wishes to initiate an encrypted connection with another host, it will look up the host's public key through DNS and use it to initiate the connection.

To begin, you'll need to generate a key for each host that you want to use this feature with. You can do that by running the following command:

ipsec newhostkey --output /tmp/`hostname`.key

Now you'll need to add the contents of the file that was created by that command to /etc/ipsec.secrets:

cat /tmp/`hostname`.key >> /etc/ipsec.secrets

Next, you'll need to generate a TXT record to put into your DNS zone. You can do this by running a command similar to this one:

ipsec showhostkey --txt @colossus.nnc

Now add this record to your zone and reload it. You can verify that DNS is working correctly by running this command:

ipsec verify

Checking your system to see if IPsec got installed and started correctly
Version check and ipsec on-path [OK]
Checking for KLIPS support in kernel [OK]
Checking for RSA private key (/etc/ipsec.secrets) [OK]
Checking that pluto is running [OK]
DNS checks.

Looking for TXT in forward map: colossus [OK]
Does the machine have at least one non-private address [OK]

Now just restart FreeS/WAN - you should now be able to connect to any other host that supports opportunistic encryption. But what if other hosts want to connect to you? To allow this, you'll need to create a TXT record for your machine in your reverse DNS zone.

You can generate the record by running a command similar to this:

ipsec showhostkey --txt 192.168.0.64

Add this record to the reverse zone for your subnet, and other machines will be able to initiate opportunistic encryption with your machine. With opportunistic encryption in use, all traffic between the hosts will be automatically encrypted, protecting all services simultaneously.

#56: Eliminate suid binaries

· Difficulty: Intermediate
· Application: find

If your server has more shell users than yourself, you should regularly audit the setuid and setgid binaries on your system. Chances are you'll be surprised at just how many you'll find. Here's one command for finding all of the files with a setuid or setgid bit set:

find / -perm +6000 -type f -exec ls -ld {} \; > setuid.txt &

This will create a file called setuid.txt that contains the details of all of the matching files present on your system. To remove the s bits of any tools that you don't use, type:
chmod a-s program

#57: Mac filtering Host AP

· Difficulty: Expert
· Application: iwpriv

While you can certainly perform MAC filtering at the link layer using iptables or ebtables, it is far safer to let Host AP do it for you. This not only blocks traffic that is destined for your network, but also prevents miscreants from even associating with your station. This helps to preclude the possibility that someone could still cause trouble for your other associated wireless clients, even if they don't have further network access.

When using MAC filtering, most people make a list of wireless devices that they wish to allow, and then deny all others. This is done using the iwpriv command.

iwpriv wlan0 addmac 00:30:65:23:17:05
iwpriv wlan0 addmac 00:40:96:aa:99:fd
...
iwpriv wlan0 maccmd 1
iwpriv wlan0 maccmd 4

The addmac directive adds a MAC address to the internal table. You can add as many MAC addresses as you like to the table by issuing more addmac commands. You then need to tell Host AP what to do with the table you've built. The maccmd 1 command tells Host AP to use the table as an "allowed" list, and to deny all other MAC addresses from associating. Finally, the maccmd 4 command boots off all associated clients, forcing them to reassociate. This happens automatically for clients listed in the table, but everyone else attempting to associate will be denied.

Sometimes, you only need to ban a troublemaker or two, rather than set an explicit policy of permitted devices. If you need to ban a couple of specific MAC address but allow all others, try this:

iwpriv wlan0 addmac 00:30:65:fa:ca:de
iwpriv wlan0 maccmd 2
iwpriv wlan0 kickmac 00:30:65:fa:ca:de

As before, you can use addmac as many times as you like. The maccmd 2 command sets the policy to "deny," and kickmac boots the specified MAC immediately, if it happens to be associated. This is probably nicer than booting everybody and making them reassociate just to ban one troublemaker. Incidentally, if you'd like to remove MAC filtering altogether, try maccmd 0.

If you make a mistake typing in a MAC address, you can use the delmac command just as you would addmac, and it (predictably) deletes the given MAC address from the table. Should you ever need to flush the current MAC table entirely but keep the current policy, use this command:

iwpriv wlan0 maccmd 3

Finally, you can view the running MAC table by using /proc:

cat /proc/net/hostap/wlan0/ap_control

The iwpriv program manipulates the running Host AP driver, but doesn't preserve settings across reboots. Once you are happy with the contents of your MAC filtering table, be sure to put the relevant commands in an rc script to run at boot time.

Note that even unassociated clients can still listen to network traffic, so MAC filtering actually does very little to prevent eavesdropping. To combat passive listening techniques, you will need to encrypt your data.
