Archive

Archive for the ‘Linux OS’ Category

How to monitor Java/Tomcat using check_jmx through firewall/Amazon VPC – Nagios/Icinga

After a lot of research on search engines, I figured out how to do JMX monitoring over a firewall/Amazon VPC. I would like to describe here everything I learnt through blogs and forums. I hope it will help someone looking for the same kind of solution and save them a lot of time and effort.

After completing the setup of Nagios/Icinga, we can use the check_jmx plugin to monitor the Tomcat/Java process: thread count, garbage collector, and heap memory usage.
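check_jmx can only connect if the JVM actually exposes JMX/RMI on a fixed port. A minimal sketch of the JVM flags, assuming Tomcat reads CATALINA_OPTS from $CATALINA_HOME/bin/setenv.sh (create it if missing); authentication is disabled here only for trusted networks, and java.rmi.server.hostname should be whatever address the monitoring system uses to reach this machine:

# vi $CATALINA_HOME/bin/setenv.sh

CATALINA_OPTS="$CATALINA_OPTS \
 -Dcom.sun.management.jmxremote \
 -Dcom.sun.management.jmxremote.port=9696 \
 -Dcom.sun.management.jmxremote.authenticate=false \
 -Dcom.sun.management.jmxremote.ssl=false \
 -Djava.rmi.server.hostname=<monitored-host>"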

Make sure your rmiregistry port is open and listening (here I used port 9696 for rmiregistry). If you are using a VPC, grant the monitoring system access to the port using the Security Groups option in the AWS console.

# nmap -p 9696 localhost

Starting Nmap 5.51 ( http://nmap.org ) at 2014-04-17 11:10 CEST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000087s latency).
PORT     STATE SERVICE
9696/tcp open  rmiregistry

Nmap done: 1 IP address (1 host up) scanned in 0.14 seconds

I always keep iptables enabled on my systems, since it protects them against both malicious users and software such as viruses/worms.

Add rmiregistry to iptables and /etc/services as well.

# cat /etc/services | grep rmiregistry
rmiregistry     9696/tcp                # RMI Registry

# iptables -L
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             anywhere            state NEW tcp dpt:rmiregistry
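If the rule is not present yet, something like the following adds it and saves it across reboots (a sketch; where possible, restrict the source to your monitoring host with -s):

# iptables -I INPUT -p tcp --dport 9696 -m state --state NEW -j ACCEPT
# service iptables save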

Make sure the iptables service is enabled at boot with chkconfig.

# chkconfig iptables on

Disable SELinux and verify with getenforce:

# getenforce
Disabled

On the client machine (the Tomcat/Java machine), edit your NRPE config and add the commands shown below, so that the monitoring system can execute them via NRPE over the port you opened through the VPC/firewall.

# vi /etc/nagios/nrpe.cfg

#### JMX Monitoring ####

command[check_jmx_threadcount]=/usr/lib64/nagios/plugins/check_jmx -U service:jmx:rmi:///jndi/rmi://"hostname":9696/jmxrmi -O java.lang:type=Threading -A ThreadCount -K "" Total

command[check_jmx_heap]=/usr/lib64/nagios/plugins/check_jmx -U service:jmx:rmi:///jndi/rmi://"hostname":9696/jmxrmi -O java.lang:type=Memory -A HeapMemoryUsage -K used -I HeapMemoryUsage -J used -vvvv -w 102400 -c 81290

command[check_jmx_garbagecollector]=/usr/lib64/nagios/plugins/check_jmx -U service:jmx:rmi:///jndi/rmi://"hostname":9696/jmxrmi -O "java.lang:type=GarbageCollector,name=Copy" -A LastGcInfo -K duration -w 3500 -c 4000 -u ms
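After restarting the NRPE daemon, it is worth calling one of these from the monitoring system with check_nrpe directly before wiring them into Icinga (a sketch; adjust the check_nrpe path and client address to your install):

# /usr/local/icinga/libexec/check_nrpe -H <client-host> -c check_jmx_heap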

Now add the check_jmx commands on your monitoring system, matching what you already defined on the client machine.

Edit commands.cfg and add the following to monitor heap, thread count, and the garbage collector.

# ‘check_jmx’ command definition

define command{
command_name    check_jmx_heap
command_line    $USER1$/check_nrpe -H $HOSTADDRESS$  -c check_jmx_heap
}

define command{
command_name    check_jmx_current_threadcount
command_line    $USER1$/check_nrpe -H $HOSTADDRESS$  -c check_jmx_threadcount
}

define command{
command_name    check_jmx_garbagecollector
command_line    $USER1$/check_nrpe -H $HOSTADDRESS$  -c check_jmx_garbagecollector
}
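These command definitions still have to be attached to the monitored host through service definitions; a minimal sketch for one of them, assuming a host object named tomcat-server and the stock generic-service template:

define service{
use                     generic-service
host_name               tomcat-server
service_description     JMX Heap Memory
check_command           check_jmx_heap
}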

Additional note:

If an argument contains a space, check_jmx will normally fail to handle the request,
for example "java.lang:type=GarbageCollector,name=PS Scavenge". To fix this, edit check_jmx and wrap $@ in double quotes.

Change the last line of check_jmx from java -cp $RDIR/jmxquery.jar org.nagios.JMXQuery $@ to java -cp $RDIR/jmxquery.jar org.nagios.JMXQuery "$@"

The updated check_jmx is shown below:

# cat /usr/lib64/nagios/plugins/check_jmx
#!/bin/sh
#
# Nagios plugin to monitor Java JMX (http://java.sun.com/jmx) attributes.
#
RDIR=`dirname $0`
java -cp $RDIR/jmxquery.jar org.nagios.JMXQuery "$@"

Result:

[Screenshot: the check_jmx results in the Icinga web interface]

How to Install and Configure Icinga on CentOS/Fedora/RHEL

So let's get started. I'm building up a system with the usual requirements:

# yum install -y rsync vim wget curl git zip unzip mlocate make

# Core packages Icinga needs

# yum install httpd gcc glibc glibc-common gd gd-devel

# yum install libjpeg libjpeg-devel libpng libpng-devel

# yum install -y net-snmp net-snmp-devel net-snmp-utils

# yum install mysql mysql-server libdbi libdbi-devel libdbi-drivers libdbi-dbd-mysql

Let's build a download folder and get all the software from Icinga that we will need.

# mkdir -p /opt/installs && cd /opt/installs

Download the latest core package

# wget http://sourceforge.net/projects/icinga/files/latest/download?source=files

This is the web front end; it's at version 1.8.1:

# wget http://sourceforge.net/projects/icinga/files/icinga-web/1.8.1/icinga-web-1.8.1.tar.gz/download

Grab the Nagios plugins too since we are here:

# wget http://sourceforge.net/projects/nagiosplug/files/latest/download?source=files

Get the latest NRPE module

# wget http://sourceforge.net/projects/nagios/files/nrpe-2.x/nrpe-2.14/nrpe-2.14.tar.gz/download

Create a new Icinga user and give it a password:

/usr/sbin/useradd -m icinga

passwd icinga

Add the Icinga group:

/usr/sbin/groupadd icinga

For sending commands to the classic interface you’ll need to add the below:

/usr/sbin/groupadd icinga-cmd

/usr/sbin/usermod -a -G icinga-cmd icinga

/usr/sbin/usermod -a -G icinga-cmd apache

Compile and install Icinga:

tar xvf icinga-1.8.4.tar.gz

cd icinga-1.8.4

Run the configure script and make the install:

./configure --with-command-group=icinga-cmd

make all

make fullinstall

# make install-config is only needed on a new install. Don't do this if it's an upgrade!

make install-config

Customizing the configuration:

Edit the /usr/local/icinga/etc/objects/contacts.cfg config file with your favorite editor and change the email address associated with the icingaadmin contact definition to the address you’d like to use for receiving alerts.

vim /usr/local/icinga/etc/objects/contacts.cfg

#email                           icinga@localhost       ; <<***** CHANGE THIS TO YOUR EMAIL ADDRESS ******

email                           user-name@your-email.com       ; <<***** CHANGE THIS TO YOUR EMAIL ADDRESS ******

Configure the classic web interface:

make cgis

make install-cgis

make install-html

Install the Apache config:

make install-webconf

Create an icingaadmin account for logging into the Icinga classic web interface. If you want to change it later, use the same command. Remember the password you assign to this account – you’ll need it later.

htpasswd -c /usr/local/icinga/etc/htpasswd.users icingaadmin

If you want to change it later or add another user:

htpasswd /usr/local/icinga/etc/htpasswd.users

Add Apache to startup and restart / start:

chkconfig httpd on

/etc/init.d/httpd restart

Extract and configure the Nagios plugins:

cd /opt/installs

tar -xvf nagios-plugins-1.4.16.tar.gz

cd nagios-plugins-1.4.16

./configure --prefix=/usr/local/icinga --with-cgiurl=/icinga/cgi-bin --with-nagios-user=icinga --with-nagios-group=icinga

make

make install

Extract and configure NRPE Plugin:

cd /opt/installs

tar -xvf nrpe-2.14.tar.gz

cd nrpe-2.14

./configure --with-ssl --with-nrpe-user=icinga --with-nrpe-group=icinga --with-nagios-user=icinga --with-nagios-group=icinga --libexecdir=/usr/local/icinga/libexec/ --bindir=/usr/local/icinga/bin/

make all && make install

RHEL and derived distributions like Fedora and CentOS are shipped with activated SELinux (Security Enhanced Linux) running in “enforcing” mode. This may lead to “Internal Server Error” messages when you try to invoke the Icinga-CGIs.

Check if SELinux runs in enforcing mode:

getenforce

Set SELinux in “permissive” mode:

setenforce 0

To make this change permanent you have to adjust this setting in /etc/selinux/config and restart the system.
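The relevant line in /etc/selinux/config looks like this:

SELINUX=permissive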

Instead of deactivating SELinux or setting it into permissive mode you can use the following commands to run the CGIs in enforcing/targeted mode:
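A sketch of one common approach, assuming the default /usr/local/icinga prefix used above (check the Icinga SELinux documentation for the exact contexts on your release):

# chcon -R -t httpd_sys_script_exec_t /usr/local/icinga/sbin/
# chcon -R -t httpd_sys_content_t /usr/local/icinga/share/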

Add Icinga Core to startup:

chkconfig --add icinga

chkconfig icinga on

Verify the sample config:

/usr/local/icinga/bin/icinga -v /usr/local/icinga/etc/icinga.cfg

Total Warnings: 0

Total Errors:   0

Instead of specifying the paths to binary and config file you can issue:

/etc/init.d/icinga show-errors

Start up icinga core:

/etc/init.d/icinga start

Login to the classic interface with the icingaadmin user:

# Open a browser

http://yourdomain/icinga

Setup your web GUI account / password:

# The login is icingaadmin with the password set earlier; new users can be added with

htpasswd /usr/local/icinga/etc/htpasswd.users youradmin

Set MySQL to start on boot up:

chkconfig mysqld on

Create Database, User, Grants:

# mysql -u root -p

mysql> CREATE DATABASE icinga;
       GRANT USAGE ON icinga.* TO 'icinga'@'localhost'
       IDENTIFIED BY 'icinga'
       WITH MAX_QUERIES_PER_HOUR 0
       MAX_CONNECTIONS_PER_HOUR 0
       MAX_UPDATES_PER_HOUR 0;
       GRANT SELECT, INSERT, UPDATE, DELETE, DROP, CREATE VIEW, INDEX, EXECUTE
       ON icinga.* TO 'icinga'@'localhost';
       FLUSH PRIVILEGES;
       quit

Import the database schema for MySQL (adjust the path to match the Icinga version you extracted):

# mysql -u root -p icinga < /opt/installs/icinga-1.8.4/module/idoutils/db/mysql/mysql.sql
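A quick sanity check that the schema was imported (a sketch, using the icinga user created above):

# mysql -u icinga -p icinga -e "SHOW TABLES;" | head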

You are done with the Icinga Core and Classic Web Interface. Open the Icinga page in your browser and check out the console.

Setting Up the Amazon EC2 Command Line Interface Tools with Linux

Introduction

This short guide presents an example of the steps on how to install the Amazon API Tools and the Amazon AMI Tools on the Linux (CentOS 6) platform.

The Amazon API Tools and the Amazon AMI Tools are packages of command-line scripts to the AWS web service used to manage and bundle instances. Although there are many other Developer Tools provided by Amazon and the AWS development community to help developers create and manage applications built on AWS, these two are the most commonly used to manage EC2 instances.

The API tools serve as the client interface to the Amazon EC2 web service. Use these tools to register and launch instances, manipulate security groups, and more.

The Amazon EC2 AMI Tools are command-line utilities to help bundle an Amazon Machine Image (AMI), create an AMI from an existing machine or installed volume, and upload a bundled AMI to Amazon S3.

Install Amazon EC2 Tools (Linux)

Use the following steps to install the Amazon API Tools and the Amazon AMI Tools on the Linux platform.

  1. Shell Login Script

Add the following environment variables to your shell login script (i.e. /root/.bashrc). Make any necessary changes for your specific environment by replacing AWS_ACCOUNT_NUMBER, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY with your AWS account number and security credentials. Make certain to remove the < and > characters when providing your values.

# cp /root/.bashrc /root/.bashrc.backup 

# vi /root/.bashrc

 

export EC2_BASE=/opt/ec2

export EC2_HOME=$EC2_BASE/tools

export EC2_PRIVATE_KEY=$EC2_BASE/certificates/ec2-pk.pem

export EC2_CERT=$EC2_BASE/certificates/ec2-cert.pem

export EC2_URL=https://ec2.amazonaws.com

export AWS_ACCOUNT_NUMBER=<999999999999>

export AWS_ACCESS_KEY_ID=<your_access_key_id>

export AWS_SECRET_ACCESS_KEY=<your_secret_access_key>

export PATH=$PATH:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:$EC2_HOME/bin

export JAVA_HOME=/usr

# source ~/.bashrc

  • [EC2_BASE] — Base directory for all Amazon EC2 related components (i.e. tools and certificates). On Linux, I commonly use /opt/ec2
  • [EC2_HOME] — Installation directory for the Amazon API Tools and the Amazon AMI Tools. This directory should be created as a sub-directory in EC2_BASE. This environment variable will be used by all of the command-line tools in both packages
  • [EC2_PRIVATE_KEY] and [EC2_CERT] — EC2 private key file and EC2 certificate file. I typically rename the X.509 certificate files as follows: private key file (ec2-pk.pem) and certificate file (ec2-cert.pem)
  • [EC2_URL] — Specifies a Region endpoint for your environment. Amazon uses this environment variable (or the --url command-line flag) to choose a default Region when running any of the command-line tools. The default Region for the endpoint used in the example shell login script above is us-east-1 and is the one I use based on my geographic location near the east coast
  • [AWS_ACCOUNT_NUMBER] — AWS account number (sometimes called the account id) which shows up when you go to the Account Activity area of the AWS web site. The account number is a 12 digit number that appears in the top-right of the Account Activity page and is in the form 9999-9999-9999. When you use the account number in the context of the APIs, you should leave out the hyphens and just enter the 12 digits
  • [AWS_ACCESS_KEY_ID] and [AWS_SECRET_ACCESS_KEY] — The AWS Access Key and Secret Key serve the purpose of ID and Password to access Amazon S3. Navigate to Security Credentials, click on the Access Keys tab under Access Credentials to create or view your Access Key ID and Secret Access Key
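After sourcing the file, a quick check that the variables are visible in the current shell:

# env | grep -E '^(EC2|AWS)'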
  2. Install Java

The EC2 API Tools and Amazon EC2 AMI Tools are Java based. If you don’t already have a version of Java installed, do so now.

# yum -y install java-1.6.0-openjdk

The JAVA_HOME environment variable should be set to the appropriate home directory in your shell login script (i.e. /root/.bashrc) which was handled in the previous step. Verify the JAVA_HOME environment variable is set for the current shell and confirm that Java is installed correctly.

# echo $JAVA_HOME
/usr

 

# java -version

java version "1.6.0_24"

OpenJDK Runtime Environment (IcedTea6 1.11.3) (rhel-1.48.1.11.3.el6_2-x86_64)

OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)

  3. Install the Amazon EC2 Tools

Download the Amazon EC2 API Tools.

# mkdir -p $EC2_HOME

# curl -o /tmp/ec2-api-tools.zip http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip

# unzip /tmp/ec2-api-tools.zip -d /tmp

# cp -r /tmp/ec2-api-tools-*/* $EC2_HOME

Download the Amazon EC2 AMI Tools to the EC2 image.

# curl -o /tmp/ec2-ami-tools.zip http://s3.amazonaws.com/ec2-downloads/ec2-ami-tools.zip 

# unzip /tmp/ec2-ami-tools.zip -d /tmp

# cp -rf /tmp/ec2-ami-tools-*/* $EC2_HOME

  4. EC2 Private Key File and EC2 Certificate File

Copy your X.509 Certificate (private key file and certificate file) to appropriate directory. For the purpose of this example, I will be renaming my private key file from pk-2L7LZYRTNEAC4KGZMPPZWAOZ4KYCTCA4.pem to ec2-pk.pem and my certificate file from cert-2L7LZYRTNEAC4KGZMPPZWAOZ4KYCTCA4.pem to ec2-cert.pem.

# mkdir -p $EC2_BASE/certificates

# cp pk-xxxxxxxxxxxxxxxxxxxxxxxxxxxx.pem $EC2_BASE/certificates/ec2-pk.pem

# cp cert-xxxxxxxxxxxxxxxxxxxxxxxxxxxx.pem $EC2_BASE/certificates/ec2-cert.pem

  5. Verify Amazon EC2 Tools

Verify that the Amazon EC2 Tools have been installed correctly.

Test the ec2-describe-regions script which is found in the EC2 API Tools to list the regions you have access to.

# ec2-describe-regions | sort
REGION ap-northeast-1     ec2.ap-northeast-1.amazonaws.com

REGION ap-southeast-1     ec2.ap-southeast-1.amazonaws.com

REGION eu-west-1         ec2.eu-west-1.amazonaws.com

REGION sa-east-1         ec2.sa-east-1.amazonaws.com

REGION us-east-1         ec2.us-east-1.amazonaws.com

REGION us-west-1       ec2.us-west-1.amazonaws.com

REGION us-west-2         ec2.us-west-2.amazonaws.com

 

If you receive the following error message running any of the EC2 tools, make certain that the date and time are set correctly.

# ec2-describe-regions

Client.InvalidSecurity: Request has expired

# date

Thu Jun 7 02:53:27 EDT 2012

 

/* The above date is 17 days off */

 

# date -s “24 JUN 2012 13:36:00”

Sun Jun 24 13:36:00 EDT 2012
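Rather than setting the clock by hand each time, keeping it synced via NTP avoids the error recurring (a sketch; the ntpdate package applies to CentOS/RHEL 6 era systems):

# yum -y install ntpdate
# ntpdate pool.ntp.org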

AWS Command Line Interface (CLI)

# ec2-describe-regions | sort

Create an AMI from an existing instance without a reboot

# ec2-create-image i-xxxxxx --name CFE-AWS-CLI-AMI --no-reboot

Launching an Instance

EC2-Classic

#ec2-run-instances ami-xxxxxx -t t1.micro -k cbstest -g sg-xxxxxx

EC2-VPC

#ec2-run-instances ami-xxxxxx -t t1.micro -s subnet-f5994794 -k cbstest -g sg-xxxxxx

ec2-create-subnet

#ec2-create-subnet -c vpc_id -i cidr [ -z zone ]

****** Note: the security group and the subnet must belong to the same network (VPC) when we run an instance from the CLI ******

ec2-associate-route-table

#ec2-associate-route-table route_table_id -s subnet_id

ec2-attach-vpn-gateway

#ec2-attach-vpn-gateway vpn_gateway_id -c vpc_id

 

ec2-assign-private-ip-addresses

#ec2-assign-private-ip-addresses --network-interface interface_id {[--secondary-private-ip-address-count count] | [--secondary-private-ip-address ip_address]}

 

ec2-allocate-address

#ec2-allocate-address [-d domain]

ec2-create-route-table

#ec2-create-route-table vpc_id

 

ec2-create-vpc

#ec2-create-vpc cidr [tenancy]

 

This example command creates a security group named WebServerSG for the VPC with the ID vpc-xxxxxx.

#ec2-create-group WebServerSG -d "Web Servers" -c vpc-xxxxxx

GROUP sg-xxxxxx   WebServerSG   Web Servers

 

 

Log Files in Linux

If you are serious about learning Linux, then one aspect you will want to familiarize yourself with is log files. This will pay off when you go to a mailing list with a problem: when someone asks you for the contents of a particular log file, you will be able to offer enough information to help solve it. Log files are very good for helping you deduce what is going wrong with a system. There are, however, a lot of log files to wade through. That's where I come in. In this article I am going to show you the first places to look when you have problems with a Linux system. I won't cover all of the log files (at least yet), but I will get you started on what will hopefully become a long history of too much information.

dmesg

When I have a problem (or when I am attaching a USB device), one of the first places I go is the dmesg command. The dmesg command prints out the kernel ring buffer. The information you get is everything you did not see while your system was booting. This is a great place to get low-level information on your hardware. On one of my laptops, I run dmesg and near the top I see:

Phoenix BIOS detected: BIOS may corrupt low RAM, working it around.
last_pfn = 0x7f6d0 max_arch_pfn = 0x100000
x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106
kernel direct mapping tables up to 38000000 @ 10000-15000
Using x86 segment limits to approximate NX protection
RAMDISK: 37c6a000 - 37fef4a2

From that I can tell I have a Phoenix BIOS. Pretty obvious. A little later I see:

Security Framework initialized
SELinux: Initializing.
SELinux: Starting in permissive mode

Now I know Security Enhanced Linux is starting, in permissive mode, at bootup. And even further on down the line I see:

CPU1: Intel(R) Pentium(R) Dual  CPU  T2390  @ 1.86GHz stepping 0d
checking TSC synchronization [CPU#0 -> CPU#1]: passed.
Brought up 2 CPUs
Total of 2 processors activated (7447.76 BogoMIPS)

The above shows me information about my CPU. Good to know.

The most important information you will probably get from dmesg is the information regarding attached USB devices. When you plug in a USB device, you will need to know which special device file it is attached to so you can mount it. This will appear at the bottom of the dmesg command output.
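For example, to narrow the output to USB-related lines right after plugging a device in:

dmesg | grep -i usb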

The output of dmesg is quite long and will scroll by very quickly. When I run this command I always pipe it through the less command like so:

dmesg | less

This way I can view the output one page at a time.

/var/log

This special directory is the Mac Daddy of information gathering. Fire up a terminal window and issue the command ls /var/log/ to see what it contains. Included in this listing are log files and log directories such as:

  • boot.log – boot information
  • cron – cron logs
  • cups – directory of all printing logs
  • httpd – Apache logs
  • mail – Mail server logs
  • maillog – The mail log
  • messages – Post-boot kernel information
  • secure – Security log
  • Xorg.0.log – X Server log

You can see the listing of log files in the /var/log directory, but in order to actually read the log files you have to be the root user (or use sudo).
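For example, on a distribution where you work through sudo:

sudo less /var/log/messages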

Viewing with tail

One of the handiest methods of viewing log files is the tail command. With the -f switch, tail follows the running output of a log file. For instance, if I want to follow my /var/log/secure log to watch for security issues, I would enter the command tail -f /var/log/secure. The -f switch tells tail to follow. If you don't add -f, tail will just print the last few lines of the file and exit.

Final Thoughts

There is so much information to be gained from reading log files. The Linux operating system makes reading log files easy, once you know which log file does what. Take a poke around /var/log to find out exactly what you have and where you need to look for the problem you are having.

How to remove unused Kernels in Fedora/Centos/Redhat Linux

I have a Fedora 16 system. Through kernel updates I now have several kernel options when I boot the machine, so I thought of removing the old kernels; I like to keep the boot process clean.
There are a few ways to remove kernels that are no longer current. The process is fairly simple.

To remove a kernel, run as root:

# rpm -e kernel-version

in a terminal (Main Menu > System Tools > Terminal), where version is the full release number. Enter

# rpm -q  kernel

to get the installed kernels.
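Putting those two together, a sketch with hypothetical version numbers (never remove the kernel you are currently running; check it with uname -r first):

# uname -r
3.1.9-1.fc16.i686
# rpm -q kernel
kernel-3.1.5-2.fc16.i686
kernel-3.1.9-1.fc16.i686
# rpm -e kernel-3.1.5-2.fc16.i686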

or

Run this command as root/super user,

# package-cleanup --oldkernels
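package-cleanup is part of the yum-utils package, so install that first if the command is missing; it also takes a count of kernels to keep:

# yum -y install yum-utils
# package-cleanup --oldkernels --count=2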

After loading packages you will be prompted to enter an option, as below…

Dependencies Resolved

=======================================================================================================================================================================
Package                                           Arch                         Version                                        Repository                         Size
=======================================================================================================================================================================
Removing:
kernel                                            i686                         3.1.5-2.fc16                                   @updates                           92 M
kernel-devel                                      i686                         3.1.5-2.fc16                                   @updates                           26 M
Removing for dependencies:
kmod-wl-3.1.5-2.fc16.i686                         i686                         5.100.82.112-1.fc16.5                          installed                         2.5 M

Transaction Summary
=======================================================================================================================================================================
Remove        3 Packages

Installed size: 121 M
Is this ok [y/N]: y
Downloading Packages:
Running Transaction Check
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Warning: RPMDB altered outside of yum.
Erasing    : kmod-wl-3.1.5-2.fc16.i686-5.100.82.112-1.fc16.5.i686                                                                                                1/3
Erasing    : kernel-3.1.5-2.fc16.i686                                                                                                                            2/3
Erasing    : kernel-devel-3.1.5-2.fc16.i686                                                                                                                      3/3

Removed:
kernel.i686 0:3.1.5-2.fc16                                                      kernel-devel.i686 0:3.1.5-2.fc16

Dependency Removed:
kmod-wl-3.1.5-2.fc16.i686.i686 0:5.100.82.112-1.fc16.5

Complete!

That's it. You have removed your old/unused kernels.

RAID – why should I want / need it ?

It depends…

First, if you don't have anything critically important on your computer, and you are satisfied with its speed, you don't need RAID. The same goes if your computer is fast enough, has critical data, and you already do backups regularly.

Here is where it gets tricky: RAID 0 is just striping. It will give you extra speed in hard drive reads/writes, BUT you lose reliability – there is no redundancy, which is fine as long as you have good backups and back up religiously (as in once a day, every day)!

If your data is at all important, you need to back it up if you want to run RAID 0!

How important are the computer and its information? How critical is the data? How fast do you need it? If you have a business (or just family data) with critical data on it, you may prefer RAID 1 (duplication of data: higher cost, fast reads but slower write performance). RAID 1 doubles your HD costs, but increases the computer's reliability.

RAID 5 is the next cheapest reliable option. You need at least 3 disks for a RAID 5 array; 4 or 5 disks are more reliable still, but cost much more than a single drive.
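On Linux, software RAID is usually built with mdadm. A minimal sketch of creating a RAID 1 mirror, assuming /dev/sdb and /dev/sdc are two spare disks (this destroys any data on them):

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# mkfs.ext4 /dev/md0
# mount /dev/md0 /mnt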

How to zip / unzip a folder in linux

My friend keeps forgetting the commands to zip and extract a folder in Linux; he used to ask me at least once a week. Because of him, I am writing down here how to zip and unzip in Linux.

>>       zip – package and compress (archive) files

# zip -r silence.zip silence

The -r option tells zip to recurse into the silence directory; without it, only the top-level directory entry would be stored.

>> unzip – list, test and extract compressed files in a ZIP archive

To use unzip to extract all members of the archive silence.zip into the current directory and subdirectories below it, creating any subdirectories as necessary:

# unzip silence.zip

To extract all members of silence.zip into the current directory only:

# unzip -j silence.zip
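To peek inside an archive without extracting anything, use -l to list its contents:

# unzip -l silence.zip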

My dear friend, I hope this link will be bookmarked by you 😉

How to format a USB pendrive in Linux?

Here is some more recent R&D from my end; it should help you figure things out easily when you try to format your USB drive in Linux.

Type the following command to find out the USB pen drive's partition name:

[root@thiyag]# df
Filesystem     1K-blocks     Used Available Use% Mounted on
rootfs          51606140  6148584  44933428  13% /
devtmpfs          953540        0    953540   0% /dev
tmpfs             961808     1000    960808   1% /dev/shm
tmpfs             961808    40612    921196   5% /run
/dev/sda3       51606140  6148584  44933428  13% /
tmpfs             961808    40612    921196   5% /run
tmpfs             961808        0    961808   0% /sys/fs/cgroup
tmpfs             961808        0    961808   0% /media
/dev/sda5      251747076 83478108 155480900  35% /home
/dev/sda1         495844    78680    391564  17% /boot
/dev/sdb1        7898008        4   7898004   1% /media/2E3E-D0DE

We can use fdisk as well to identify the USB partition:

[root@thiyag]# fdisk -l

Disk /dev/sda: 320.1 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x9c68a144

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048     8759295     3866624   82  Linux swap / Solaris
/dev/sda3         8759296   113616895    52428800   83  Linux
/dev/sda4       113616896   625142447   255762776    5  Extended
/dev/sda5       113618944   625141759   255761408   83  Linux

Disk /dev/sdb: 8103 MB, 8103395328 bytes
196 heads, 32 sectors/track, 2523 cylinders, total 15826944 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xc3072e18

Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *          32    15826943     7913456    7  HPFS/NTFS/exFAT

Once you have identified the partition name, type the following command to format the USB pen drive in Linux
(caution: you must select the correct USB partition name, otherwise you will lose all the data on your hard disk)

[root@thiyag]# mkfs.vfat /dev/sdb1
mkfs.vfat 3.0.12 (29 Oct 2011)
mkfs.vfat: /dev/sdb1 contains a mounted file system.

Here we get the error "mkfs.vfat: /dev/sdb1 contains a mounted file system.",
which tells us to unmount the USB drive before we format it.
So let's unmount the mounted USB:

[root@thiyag]# umount /dev/sdb1

We didn't get any error, so it is unmounted 😉

Finally, use the following command to format your pendrive/USB drive as ext3:

# mkfs.ext3 /dev/sdb1

To format it with a VFAT/FAT32 file system, type the following command:

[root@thiyag]# mkfs.vfat /dev/sdb1
mkfs.vfat 3.0.12 (29 Oct 2011)
[root@thiyag]#
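mkfs.vfat can also set a volume label at the same time with -n (the label MYUSB here is just an example):

# mkfs.vfat -n MYUSB /dev/sdb1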

Hope it works for you too! 🙂

Wireless (broadcom-wl) not working in Fedora 13 on a Dell 1545

Fedora 13 does have support for Broadcom wireless drivers on Dell laptops, but it didn't really work out on a friend's laptop. We finally got it working, and I thought I'd note the steps down. Below are the three easy steps you need to take to make it work properly.

 

  1. Install the RPM Fusion repos…

    http://rpmfusion.org/Configuration

  2. Install the akmod version of the 64-bit driver…

    su
    yum install akmod-wl

  3. Reboot or restart NetworkManager (as shown below) and check the panel applet icon for available networks.
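If you prefer not to reboot, restarting NetworkManager on these releases looks like this:

    su
    service NetworkManager restart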

Install Google Chrome with YUM on Fedora 16/15, CentOS/Red Hat (RHEL) 6

This howto explains how to install the Google Chrome web browser on Fedora 16, Fedora 15, Fedora 14, Fedora 13, Fedora 12, CentOS 6, and Red Hat 6 (RHEL 6). The best way to install Google Chrome and keep it up to date is to use Google's own YUM repository.

Enable Google YUM repository

Add the following to the /etc/yum.repos.d/google.repo file:
32-bit

[google-chrome]
name=google-chrome - 32-bit
baseurl=http://dl.google.com/linux/chrome/rpm/stable/i386
enabled=1
gpgcheck=1
gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub

64-bit

[google-chrome]
name=google-chrome - 64-bit
baseurl=http://dl.google.com/linux/chrome/rpm/stable/x86_64
enabled=1
gpgcheck=1
gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub

Note: Both 32-bit and 64-bit repos can be placed in the same file.

Install Google Chrome with YUM (as root user)

Install Google Chrome Stable Version

## Install Google Chrome Stable version ##
yum install google-chrome-stable

Install Google Chrome Beta Version

## Install Google Chrome Beta version ##
yum install google-chrome-beta

Install Google Chrome Unstable Version

## Install Google Chrome Unstable version ##
yum install google-chrome-unstable
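Once installed, start Chrome as a regular user to verify it runs (Chrome will not start as root without extra flags):

google-chrome &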