Tuesday, November 22, 2011

TM_6795 error message : Session or its instance is invalidated and integration service is configured not to run impacted session.


If you are getting the above error, follow the steps below.
1) Check out your mapping and workflow, and fetch the mapping in your session.
2) Make a simple change in your mapping and refresh the session.
3) Save the session and run the workflow again.


Or

If you are still getting the same error after the above steps, disconnect the repository and try the steps again.

Thursday, October 13, 2011

About EXT3 File System


Typically, file systems are located inside of a disk partition. The partition is usually organized into 512-byte sectors. When the partition is formatted as Ext3, consecutive sectors will be grouped into blocks, whose size can range from 1,024 to 4,096 bytes. The blocks are grouped together into block groups, whose size will be tens of thousands of blocks. Each file has data stored in three major locations: blocks, inodes, and directory entries. The file content is stored in blocks, which are allocated for the exclusive use of the file. A file is allocated as many blocks as it needs. Ideally, the file will be allocated consecutive blocks, but this is not always possible.
The metadata for the file is stored in an inode structure, which is located in an inode table at the beginning of a block group. There are a finite number of inodes and each is assigned to a block group. File metadata includes the temporal data such as the last modified, last accessed, last changed, and deleted times. Metadata also includes the file size, user ID, group ID, permissions, and block addresses where the file content is stored.
The addresses of the first 12 blocks are saved in the inode and additional addresses are stored externally in blocks, called indirect blocks. If the file requires many blocks and not all of the addresses can fit into one indirect block, a double indirect block is used whose address is given in the inode. The double indirect block contains addresses of single indirect blocks, which contain addresses of blocks with file content. There is also a triple indirect address in the inode that adds one more layer of pointers.
Last, the file's name is stored in a directory entry structure, which is located in a block allocated to the file's parent directory. An Ext3 directory is similar to a file and its blocks contain a list of directory entry structures, each containing the name of a file and the inode address where the file metadata is stored. When you use the ls -i command, you can see the inode address that corresponds to each file name. We can see the relationship between the directory entry, the inode, and the blocks in Figure 1.
When a new file is created, the operating system (OS) gets to choose which blocks and inode it will allocate for the file. Linux will try to allocate the blocks and inode in the same block group as its parent directory. This causes files in the same directory to be close together. Later we'll use this fact to restrict where we search for deleted data.
The Ext3 file system has a journal that records updates to the file system metadata before the update occurs. In case of a system crash, the OS reads the journal and will either reprocess or roll back the transactions in the journal, so that recovery is faster than examining each metadata structure, which is the old and slow way. Example metadata structures include the directory entries that store file names and the inodes that store file metadata. The journal contains the full block that is being updated, not just the value being changed. When a new file is created, the journal should contain the updated versions of the blocks containing the directory entry and the inode.
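The directory-entry/inode split described above can be poked at on any live Unix file system; a minimal sketch, assuming GNU coreutils (`ls -i` shows the inode address from the directory entry, `stat` shows the metadata fields kept in the inode):

```shell
# Create a scratch file, then inspect the structures described above.
f=$(mktemp)
echo "hello" > "$f"
ls -i "$f"      # directory entry view: inode address, then the file name
stat -c 'size=%s uid=%u gid=%g perms=%a blocks=%b' "$f"   # metadata kept in the inode
rm -f "$f"
```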

Deletion Process
Several things occur when an Ext3 file is deleted from Linux. Keep in mind that the OS gets to choose exactly what occurs when a file is deleted and this article assumes a general Linux system.
At a minimum, the OS must mark each of the blocks, the inode, and the directory entry as unallocated so that later files can use them. This minimal approach is what occurred several years ago with the Ext2 file system. In this case, the recovery process was relatively simple because the inode still contained the block addresses for the file content and tools such as debugfs and e2undel could easily re-create the file. This worked as long as the blocks had not been allocated to a new file and the original content was not overwritten.
With Ext3, there is an additional step that makes recovery much more difficult. When the blocks are unallocated, the file size and block addresses in the inode are cleared; therefore we can no longer determine where the file content was located. We can see the relationship between the directory entry, the inode, and the blocks of an unallocated file in Figure 2.
Recovery Approaches
Now that we know the components involved with files and which ones are cleared during deletion, we can examine two approaches to file recovery (besides using a backup). The first approach uses the application type of the deleted file and the second approach uses data in the journal. Regardless of the approach, you should stop using the file system because you could create a file that overwrites the data you are trying to recover. You can power the system off and put the drive in another Linux computer as a slave drive or boot from a Linux CD.
The first step for both techniques is to determine the deleted file's inode address. This can be determined from debugfs or The Sleuth Kit (TSK). I'll give the debugfs method here. debugfs comes with most Linux distributions and is a file system debugger. To start debugfs, you'll need to know the device name for the partition that contains the deleted file. In my example, I have booted from a CD and the file is located on /dev/hda5:
# debugfs /dev/hda5
debugfs 1.37 (21-Mar-2005)
debugfs:
We can then use the cd command to change to the directory of the deleted file:
debugfs: cd /home/carrier/
The ls -d command will list the allocated and deleted files in the directory. Remember that the directory entry structure stores the name and the inode of the file and this listing will give us both values because neither is cleared during the deletion process. The deleted files have their inode address surrounded by "<" and ">":
debugfs: ls -d
415848 (12) . 376097 (12) .. 415864 (16) .bashrc
[...]
<415926> (28) oops.dat

The above details are from http://linux.sys-con.com/node/117909

Tuesday, September 13, 2011

Clearing caches in Linux

Kernels 2.6.16 and newer provide a mechanism to have the kernel drop the page cache and/or inode and dentry caches on command, which can help free up a lot of memory. Now you can throw away that script that allocated a ton of memory just to get rid of the cache...

To use /proc/sys/vm/drop_caches, just echo a number to it.

To free pagecache:
# echo 1 > /proc/sys/vm/drop_caches

To free dentries and inodes:
# echo 2 > /proc/sys/vm/drop_caches

To free pagecache, dentries and inodes:
# echo 3 > /proc/sys/vm/drop_caches

This is a non-destructive operation and will only free things that are completely unused. Dirty objects will continue to be in use until written out to disk and are not freeable. If you run "sync" first to flush them out to disk, these drop operations will tend to free more memory.
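The three variants above can be wrapped in a small guard that syncs first and rejects any other value; a hedged sketch (the function name is mine, and the write itself still requires root):

```shell
# drop_caches MODE  -- MODE is 1 (pagecache), 2 (dentries/inodes) or 3 (both)
drop_caches() {
  case "$1" in
    1|2|3) ;;
    *) echo "usage: drop_caches {1|2|3}" >&2; return 1 ;;
  esac
  sync                                   # flush dirty pages first so more becomes freeable
  echo "$1" > /proc/sys/vm/drop_caches   # requires root
}
```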


Friday, August 19, 2011

SSH port forwarding

SSH port forwarding allows you to establish a secure SSH session and then tunnel arbitrary TCP connections through it.

The syntax is: ssh -L localport:remotehost:remoteport remotehostip

[root@Desktop]# ssh -L 5280:localhost:5280 192.168.0.38
Password: *******

This drops you into a shell on the remote server; leave that session open while you use the tunnel.

You can avoid opening a remote shell altogether by adding the -N option to the ssh command.
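A hedged sketch of the -N variant, wrapped in a helper so the argument order of the -L spec is explicit (the function name is mine; the hosts and ports are from the example above):

```shell
# make_tunnel LOCALPORT REMOTEHOST REMOTEPORT SSHHOST
# -N: do not run a remote command, so no login shell is opened
make_tunnel() {
  ssh -N -L "$1:$2:$3" "$4"
}
# Usage, matching the example above:
# make_tunnel 5280 localhost 5280 192.168.0.38
```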

Thursday, August 18, 2011


"Install aborted by user / Installation aborted" error while installing R1soft

When installing the 2.x agent the installation fails with the following:

root@box:/usr/src# ./linux-agent-64-1.78.0-generic.run
Verifying archive integrity... All good.
Uncompressing Righteous Backup Linux Agent..........................................................................................................




Install aborted by user

Installation aborted
Cause

Debian has switched to a new terminal database included in the ncurses-base package. This terminal database is required for the CDP installer.

http://packages.debian.org/squeeze/ncurses-base

Resolution

Installing the ncurses-term package and setting the new TERM variable will allow the CDP installer to run properly.

#apt-get -y install ncurses-term
#export TERM=xterm1
#apt-get -y install linux-headers-$(uname -r)
If you do not have rsync installed on your system, please install it now in order to have the prerequisites necessary for this walkthrough:

#apt-get -y install rsync
The following commands will allow you to run the installer. For this example, the installer has been placed in /usr/src.

root@box:/usr/src# ./linux-agent-64-1.78.0-generic.run
For more information on installing the CDP agent, please refer to:

http://wiki.r1soft.com/display/R1D/Installing+the+Linux+Agent



R1soft installation in Debian server

If you are getting the errors below while installing R1soft on Debian servers:

=====================
root@box:~# /usr/bin/r1soft-cki
Checking for binary module
..
No binary module found
Gathering kernel information
Gathering kernel information complete.
Creating kernel headers package
Checking '/usr/src/linux-headers-2.6.30-1-common' for kernel headers
Found headers in '/usr/src/linux-headers-2.6.30-1-common'
Compressing...
uploading kernel package 100% 3863KB 3.8MB/s 00:01
Starting module build...
............................gathering required information...
sending request for kernel module...
kernel module installer failed. (0):
chroot chroot make
make[1]: Entering directory `/'
~~~~~~
make: Entering an unknown directory
make: *** /usr/src/linux-headers-2.6.30-1-common: No such file or directory. Stop.
make: Leaving an unknown directory
make[4]: *** [all] Error 2
~~~~~~
=====================

This issue is known to affect Debian, Suse, and other distros using separate architecture-specific module directories in their header packages.
Thanks to Chris at Interspire.com for working closely with us to discover a resolution.

The Debian developers have removed their common/architecture specific symlinks for the kernel headers in 2.6.29 and higher,
and in the process, have broken a whole heap of kernel module building, including the R1Soft CDPAgent module
(refer to here: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=521515)

Basically, there are now two kernel module directories, both of which contain necessary files:

drwxr-xr-x 4 root root 4096 2010-01-20 05:43 linux-headers-2.6.32-trunk-amd64
drwxr-xr-x 4 root root 4096 2010-01-20 05:47 linux-headers-2.6.32-trunk-common

Resolution
Copying the contents of these two directories into a temporary directory, with the proper makefile chosen,
will allow the r1soft-cki process to compile a module successfully.

cd /usr/src/
/bin/cp -ra linux-headers-2.6.32-3-amd64/ /usr/src/r1build
/bin/cp -ra linux-headers-2.6.32-3-common/* /usr/src/r1build/
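The copy pattern can be rehearsed safely with throwaway directories before touching /usr/src; a sketch where the directory and file names are illustrative stand-ins for the -amd64 and -common header trees:

```shell
# Throwaway stand-ins for the two header trees (names are illustrative).
src=$(mktemp -d)
mkdir -p "$src/headers-amd64" "$src/headers-common"
echo arch   > "$src/headers-amd64/Makefile"
echo common > "$src/headers-common/Kconfig"
# Same copy pattern as above: the first tree becomes the build dir,
# then the second tree's contents are merged on top of it.
cp -ra "$src/headers-amd64/" "$src/r1build"
cp -ra "$src/headers-common/"* "$src/r1build/"
ls "$src/r1build"    # both trees' files now sit side by side
```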
Now, point the r1soft-cki utility to use your temporary directory, with the following flags added to the command.

CDP2

# /usr/bin/r1soft-cki --get-module --kernel-dir /usr/src/r1build
CDP3

# /usr/bin/r1soft-setup --get-module --kernel-dir /usr/src/r1build

After a successful build, you can delete the temporary directory, start the agent, and enjoy Continuous Data Protection!

rm -r /usr/src/r1build
buagentctl start

Saturday, August 13, 2011

Adding SSH keys

If you need automatic login from host A to host B, follow the steps below.

1) SSH into server A
2) Execute the below command
ssh-keygen -t rsa (don't give any input, just hit Enter for all questions; the private and public keys will be stored in the default location)
3) Copy the public key (/root/.ssh/id_rsa.pub) and save it in the following file on the remote host, server B:

.ssh/authorized_keys2
and change the permissions of this file to 640
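The key-generation and append steps can be rehearsed non-interactively in a scratch directory; a hedged sketch (on the real hosts the paths are /root/.ssh/id_rsa on host A and ~/.ssh/authorized_keys2 on host B):

```shell
# Scratch directory stands in for /root/.ssh on host A and ~/.ssh on host B.
d=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$d/id_rsa"       # -N "": empty passphrase, no prompts
cat "$d/id_rsa.pub" >> "$d/authorized_keys2"    # on host B: append, don't overwrite
chmod 640 "$d/authorized_keys2"
```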

Tuesday, August 9, 2011

IPTABLES

First, check whether the IP is already present in your iptables rules:

iptables -nL | grep <IP>

If the IP is listed in your iptables rules, delete it using the commands below:

iptables -L INPUT -n --line-numbers (list the rules with line numbers)
iptables -D INPUT <line number>

Save the iptables rules >>> /etc/init.d/iptables save
Restart iptables >>> /etc/init.d/iptables restart


Other Features

Deny access to a specific IP address
iptables -I FORWARD -d 123.123.123.123 -j DROP

Deny access to a specific Subnet
iptables -I FORWARD -s 192.168.2.0/255.255.255.0 -j DROP

Deny access to a specific IP address range with Logging
iptables -I FORWARD -m iprange --src-range 192.168.1.10-192.168.1.13 -j logdrop

Deny access to a specific Outbound IP address with logging
iptables -I OUTPUT -d 239.255.255.250 -j logdrop

Block SMTP traffic except to specified hosts
/usr/sbin/iptables -I FORWARD 1 -p tcp -d safe.server1.com --dport 25 -j logaccept
/usr/sbin/iptables -I FORWARD 2 -p tcp -d safe.server2.com --dport 25 -j logaccept
/usr/sbin/iptables -I FORWARD 3 -p tcp --dport 25 -j logdrop


Block outgoing SMTP traffic except from specified hosts
iptables -I FORWARD 1 -p tcp -s 192.168.1.2 --dport 25 -j ACCEPT
iptables -I FORWARD 2 -p tcp -s 192.168.1.1/24 --dport 25 -j REJECT


Allow HTTP traffic only to specific domain(s)
iptables -I FORWARD 1 -p tcp -d dd-wrt.com --dport 80 -j ACCEPT
iptables -I FORWARD 2 -p tcp --dport 80 -j DROP


Block all traffic except HTTP HTTPS and FTP
iptables -I FORWARD 1 -p tcp -m multiport --dports 21,80,443 -j ACCEPT
iptables -I FORWARD 2 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -I FORWARD 3 -j DROP


Port Forwarding to a specific LAN IP
Port forwarding can be accomplished from within the router's web interface. However, the very same thing can be done a bit differently (tested and working) via the command line. Example with port 443 and IP 192.168.1.2:
iptables -t nat -I PREROUTING -p tcp -d $(nvram get wan_ipaddr) --dport 443 -j DNAT --to 192.168.1.2:443
iptables -I FORWARD -p tcp -d 192.168.1.2 --dport 443 -j ACCEPT

If you want to restrict the source IP (a question that is asked a lot on the forums), add -s 123.45.67.89 to one of your rules (replacing the IP address with the real one of course).
iptables -t nat -I PREROUTING -p tcp -s 123.45.67.89 -d $(nvram get wan_ipaddr) --dport 443 -j DNAT --to 192.168.1.2:443
iptables -I FORWARD -p tcp -d 192.168.1.2 --dport 443 -j ACCEPT

This should make it so only one IP address is able to access your forwarded port from the Internet.
In order for me to get this to work (v.24), I needed to put the "-s 123.45.67.89" in the "iptables -I FORWARD" command as well; when it was in the PREROUTING command only, I was still able to access the internal resource from any IP address!
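Putting that note together with the rules above, a hedged restatement of the working rule pair, with the source restriction on both the PREROUTING and the FORWARD rule:

```
iptables -t nat -I PREROUTING -p tcp -s 123.45.67.89 -d $(nvram get wan_ipaddr) --dport 443 -j DNAT --to 192.168.1.2:443
iptables -I FORWARD -p tcp -s 123.45.67.89 -d 192.168.1.2 --dport 443 -j ACCEPT
```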

Sunday, August 7, 2011

Redirecting a URL to a different port with and without conditions

Redirecting a URL and using a specific port is a question that got my head scratching one day. Someone had a login
page for example login.html. To enhance security they later decided they would set up another server that listened on
a nonstandard port (8080) and move the login page to there. To implement this they needed to employ URL and port
redirection. This is how the port redirection can be done:

--------------------------------------------------------------------------------------------------------
RewriteEngine On
RewriteCond %{THE_REQUEST} ^[a-z]{3,9}\ /login\.html\ HTTP/ [NC]
RewriteRule ^.*login\.html$ http://secure1.example.com:8080/ [R=301,L]
--------------------------------------------------------------------------------------------------------
For use in .htaccess and if it is not set globally or for the root directory of your domain, be sure to set
Options +Indexes +FollowSymLinks as needed before the RewriteEngine On directive.
Also, depending on your server configuration you may need to use RewriteBase. Typical usage is RewriteBase /
placed just after the RewriteEngine On directive. Further details on RewriteBase are provided in a previous section.

This example is very similar to the "How to redirect your home page" example above, except that here RewriteRule and RewriteCond match login\.html. Note that the RewriteCond ensures that the target of the GET is login.html in the root directory of the domain only. If such a strict interpretation is not required, you can remove the RewriteCond statement. The port redirection itself is specified by the :8080 in the second argument to RewriteRule.

TIP!
You can even get more creative by modifying the RewriteCond to use HTTP_USER_AGENT in place of THE_REQUEST, use negation on the second argument, and then specify a regex for, say, msnbot, Slurp, or Googlebot. This would cause redirection to occur except when a search bot is requesting. This is useful because bots can't log in, so this is a way to provide crawlable content that otherwise would not get indexed.
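A hedged sketch of that tip, built from the example above (the bot names are from the tip; the negated condition skips the redirect for matching user agents):

```apache
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} !(msnbot|Slurp|Googlebot) [NC]
RewriteRule ^.*login\.html$ http://secure1.example.com:8080/ [R=301,L]
```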

Friday, August 5, 2011

Configure remote Database connection for Fantastico and Softaculous

Fantastico

Go to /usr/local/cpanel/3rdparty/fantastico/include

touch mysqlconfig.local.php ; chmod 755 mysqlconfig.local.php

add the below lines in the newly created file




Softaculous

Go to "Remote MySQL server setup", add the remote DB IP, select the "password" radio button, enter the root password of your DB server, and save.

Friday, May 27, 2011

exim dead but subsys locked

You can fix this issue by removing the eximdisable file, /etc/eximdisable.

Sunday, May 15, 2011

FFMPEG and its scripts

This is a tutorial to enable video sharing support on Centos servers.

This should install ffmpeg, mplayer, mencoder, flvtool2, yamdi, x264, theora, mp3lame, vorbis, ogg, faac, faad2, xvid, mediainfo, mp4box, and neroaacenc. These tools will enable the following on your server:

video and audio conversion
thumbnail generation
FLV meta injection (flvtool2, yamdi)
extra codecs (x264, theora, mp3lame, vorbis, ogg, faac, faad2, xvid)
This is functional and we update it each time we configure a new server.
Installation is done using the “root” account.


Attention: If you copy and paste the commands below, make sure double dashes ("--") have not been converted to a single dash or another character by your browser or editor. If they get converted, edit them back. Some options use two "-" characters.

Some prerequisites:

yum install gcc gcc-c++ automake autoconf libtool yasm git subversion
yum install zlib-devel libmad-devel libvorbis-devel libtheora-devel lame-devel faac-devel a52dec-devel xvidcore-devel freetype-devel
yum install libogg zlib-devel libtool

rpm -ivh http://rpm.livna.org/livna-release.rpm
yum install yasm
yum install libogg libogg-devel libvorbis libvorbis-devel

The quick way to setup ffmpeg, mplayer, mencoder:

rpm -Uhv http://apt.sw.be/redhat/el5/en/i386/rpmforge/RPMS/rpmforge-release-0.5.1-1.el5.rf.i386.rpm

or, if you have a 64-bit server:

rpm -Uhv http://apt.sw.be/redhat/el5/en/x86_64/rpmforge/RPMS/rpmforge-release-0.5.1-1.el5.rf.x86_64.rpm

yum -y install ffmpeg ffmpeg-devel mplayer mencoder

Edit the /etc/ld.so.conf file and add the following lines:

/usr/local/lib
/usr/lib

GIT
(required to get X264)

yum install curl-devel expat-devel gettext-devel openssl-devel zlib-devel

cd /usr/local/src
wget http://www.kernel.org/pub/software/scm/git/git-1.6.0.4.tar.gz
tar -zxvf git-1.6.0.4.tar.gz
cd git-1.6.0.4
make prefix=/usr/local all
make prefix=/usr/local/ install
git --version
And git manpages:
cd /usr/local/src
wget http://www.kernel.org/pub/software/scm/git/git-manpages-1.6.0.4.tar.gz
cd /usr/local/share/man
tar -zxvf /usr/local/src/git-manpages-1.6.0.4.tar.gz
YASM

YASM is a modular assembler; it is required by the x264 package.

cd /usr/local/src/
wget http://www.tortall.net/projects/yasm/releases/yasm-0.7.0.tar.gz
tar zfvx yasm-0.7.0.tar.gz
cd yasm-0.7.0
./configure
make && make install
cd ..

X264

cd /usr/local/src/
git clone git://git.videolan.org/x264.git
cd /usr/local/src/x264
./configure --enable-shared --prefix=/usr
make && make install
ln -s /usr/local/lib/libx264.so /usr/lib/libx264.so

Essential Codecs
cd /usr/local/src/
wget http://www.mplayerhq.hu/MPlayer/releases/codecs/essential-20071007.tar.bz2
tar xjvf essential-20071007.tar.bz2
mkdir /usr/local/lib/codecs/
mv essential-20071007/ /usr/local/lib/codecs/
chmod -R 755 /usr/local/lib/codecs/
Or all codecs:

cd /usr/local/src/
wget http://www.mplayerhq.hu/MPlayer/releases/codecs/all-20100303.tar.bz2
tar xjvf all-20100303.tar.bz2

mkdir /usr/local/lib/codecs/
mv all-20100303 /usr/local/lib/codecs/

LAME

cd /usr/local/src/

wget -O lame-3.98.4.tar.gz 'http://downloads.sourceforge.net/project/lame/lame/3.98.4/lame-3.98.4.tar.gz?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Flame%2Ffiles%2F&ts=1285175656&use_mirror=switch'
tar zxvf lame-3.98.4.tar.gz
cd /usr/local/src/lame-3.98.4
./configure
make && make install

OGG

cd /usr/local/src/
wget downloads.xiph.org/releases/ogg/libogg-1.1.3.tar.gz
tar zxvf libogg-1.1.3.tar.gz
cd /usr/local/src/libogg-1.1.3
./configure --enable-shared && make && make install
PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
export PKG_CONFIG_PATH

VORBIS

cd /usr/local/src/
wget downloads.xiph.org/releases/vorbis/libvorbis-1.1.2.tar.gz
tar zxvf libvorbis-1.1.2.tar.gz
cd /usr/local/src/libvorbis-1.1.2
./configure && make && make install

Theora
cd /usr/local/src/
wget http://downloads.xiph.org/releases/theora/libtheora-1.1.1.tar.bz2
tar jxvf libtheora-1.1.1.tar.bz2
cd /usr/local/src/libtheora-1.1.1
./configure --prefix=/usr --enable-shared
make && make install
ln -s /usr/local/lib/libtheora.so /usr/lib/libtheora.so

FAAC
cd /usr/local/src/
wget http://downloads.sourceforge.net/faac/faac-1.28.tar.gz
tar zxvf faac-1.28.tar.gz
cd /usr/local/src/faac-1.28
./configure --prefix=/usr
make && make install
FAAD2

cd /usr/local/src/
wget http://downloads.sourceforge.net/faac/faad2-2.6.1.tar.gz
tar zxf faad2-2.6.1.tar.gz
cd faad2
autoreconf -vif
./configure --disable-drm --disable-mpeg4ip
make && make install

OpenJPEG
cd /usr/local/src/
wget http://openjpeg.googlecode.com/files/openjpeg_v1_3.tar.gz
tar zxvf openjpeg_v1_3.tar.gz
cd OpenJPEG_v1_3
make && make install
ldconfig

Xvid
cd /usr/local/src/
wget http://downloads.xvid.org/downloads/xvidcore-1.2.1.tar.gz
tar zxfv xvidcore-1.2.1.tar.gz
cd /usr/local/src/xvidcore/build/generic
./configure --enable-shared
make && make install
ln -s /usr/local/lib/libxvidcore.so.4.2 /usr/lib/libxvidcore.so.4.2
Before installing ffmpeg, set up some links for scripts that look in certain locations for codecs:

ln -s /usr/local/lib/libavformat.so.50 /usr/lib/libavformat.so.50
ln -s /usr/local/lib/libavcodec.so.51 /usr/lib/libavcodec.so.51
ln -s /usr/local/lib/libavutil.so.49 /usr/lib/libavutil.so.49
ln -s /usr/local/lib/libmp3lame.so.0 /usr/lib/libmp3lame.so.0
ln -s /usr/local/lib/libavformat.so.51 /usr/lib/libavformat.so.51
ln -s /usr/local/lib/libavdevice.so.52 /usr/lib/libavdevice.so.52

ln -s /usr/lib/libtheora.so.0.3.10 /usr/local/lib/libtheora.so.0.3.10
ln -s /usr/lib/libx264.so.80 /usr/local/lib/libx264.so.80
ln -s /usr/lib/libtheora.so.0.3.10 /usr/local/lib/libtheora.so
ln -s /usr/lib/libx264.so.80 /usr/local/lib/libx264.so

FFMPEG (download latest from SVN)

export TMPDIR=$HOME/tmp
export LD_LIBRARY_PATH=/usr/local/lib/

cd /usr/local/src/
svn checkout svn://svn.mplayerhq.hu/ffmpeg/trunk ffmpeg
cd /usr/local/src/ffmpeg/

./configure --enable-libfaac --enable-shared --enable-memalign-hack --enable-gpl --enable-libtheora --enable-libmp3lame --enable-libopenjpeg --enable-libvorbis --enable-libx264 --enable-libxvid --enable-nonfree --enable-postproc --enable-avfilter --enable-swscale
make && make install

ln -s /usr/local/bin/ffmpeg /usr/bin/ffmpeg

Note: each FFMPEG configure option above uses two "-" characters ("--").

MPLAYER

cd /usr/local/src/
svn checkout svn://svn.mplayerhq.hu/mplayer/trunk mplayer
cd /usr/local/src/mplayer
svn update
cd /usr/local/src/mplayer
./configure && make && make install

ln -s /usr/local/bin/mencoder /usr/bin/mencoder
ln -s /usr/local/bin/mplayer /usr/bin/mplayer

FLVTOOL2
First install Ruby from WHM.

cd /usr/local/src/
wget rubyforge.org/frs/download.php/9225/flvtool2_1.0.5_rc6.tgz
tar zxvf flvtool2_1.0.5_rc6.tgz
cd /usr/local/src/flvtool2_1.0.5_rc6/
ruby setup.rb config
ruby setup.rb setup
ruby setup.rb install

YAMDI

cd /usr/local/src/
wget http://downloads.sourceforge.net/project/yamdi/yamdi/1.4/yamdi-1.4.tar.gz?use_mirror=ufpr
tar zxf yamdi-1.4.tar.gz
cd yamdi-1.4
gcc yamdi.c -o yamdi -O2 -Wall
mv yamdi /usr/bin/
yamdi -h

INSTALLATION RESULTS

mencoder: /usr/local/bin/mencoder
mplayer: /usr/local/bin/mplayer
yamdi: /usr/bin/yamdi

Add these symlinks in /usr/bin if you need the tools there by default:
ln -s /usr/local/bin/mencoder /usr/bin/mencoder
ln -s /usr/local/bin/mplayer /usr/bin/mplayer
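After all the steps above, a quick way to confirm everything landed on the PATH is to loop over the expected binaries; a hedged sketch (the tool list is taken from this guide):

```shell
# Report where each expected tool was installed, or flag it as missing.
for tool in ffmpeg mplayer mencoder flvtool2 yamdi MP4Box mediainfo neroAacEnc; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool -> $(command -v "$tool")"
  else
    echo "$tool -> MISSING"
  fi
done
```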

SuPHP fix

With suPHP, "env -i" is required when executing PHP scripts.

exec("env -i /usr/bin/php " . $cmd . ' >/dev/null &');

MediaInfo

http://mediainfo.sourceforge.net/en/Download/CentOS

wget http://downloads.sourceforge.net/zenlib/libzen0-0.4.14-1.i386.CentOS_5.rpm
wget http://downloads.sourceforge.net/zenlib/libzen0-devel-0.4.14-1.i386.CentOS_5.rpm
wget http://downloads.sourceforge.net/mediainfo/libmediainfo0-0.7.32-1.i386.CentOS_5.rpm
wget http://downloads.sourceforge.net/mediainfo/libmediainfo0-devel-0.7.32-1.i386.CentOS_5.rpm
wget http://downloads.sourceforge.net/mediainfo/mediainfo-0.7.32-1.i386.CentOS_5.rpm
rpm -vi libzen0-0.4.14-1.i386.CentOS_5.rpm
rpm -vi libzen0-devel-0.4.14-1.i386.CentOS_5.rpm
rpm -vi libmediainfo0-0.7.32-1.i386.CentOS_5.rpm
rpm -vi libmediainfo0-devel-0.7.32-1.i386.CentOS_5.rpm
rpm -vi mediainfo-0.7.32-1.i386.CentOS_5.rpm

ln -s /usr/bin/mediainfo /usr/local/bin/mediainfo

MP4Box

yum -y install freetype-devel SDL-devel freeglut-devel

wget -c http://mirror.ffmpeginstaller.com/source/gpac/gpac-full-0.4.5.tar.gz

tar -xzf gpac-full-0.4.5.tar.gz
cd gpac

./configure --prefix=/usr/local/cpffmpeg/ --extra-cflags=-I/usr/local/cpffmpeg/include/ --extra-ldflags=-L/usr/local/cpffmpeg/lib --disable-wx --strip

make && make lib && make apps && make install lib && make install

cp bin/gcc/libgpac.so /usr/lib

ln -s /usr/local/cpffmpeg/bin/MP4Box /usr/local/bin/MP4Box
ln -s /usr/local/cpffmpeg/bin/MP4Box /usr/bin/MP4Box

install -m644 bin/gcc/libgpac.so /usr/local/lib/libgpac.so
chmod +x /usr/local/lib/libgpac.so
ldconfig

neroAacEnc

wget ftp://ftp6.nero.com/tools/NeroDigitalAudio.zip
unzip NeroDigitalAudio.zip -d nero
cd nero/linux
sudo install -D -m755 neroAacEnc /usr/local/bin

ln -s /usr/local/bin/neroAacEnc /usr/bin/neroAacEnc

uploadprogress

cd /usr/local/src
wget http://pecl.php.net/get/uploadprogress-1.0.0.tgz
tar -zxvf uploadprogress-1.0.0.tgz
cd uploadprogress-1.0.0
phpize
./configure && make && make install

Edit /usr/lib/php.ini and add:

extension = "uploadprogress.so"

Tuesday, April 26, 2011

Exim : retry time not reached for any host after a long failure period

The issue is because of the corrupted exim db files.

Go to /var/spool/exim/db and delete these files: retry, retry.lockfile, wait-remote_smtp, wait-remote_smtp.lockfile. Then restart exim:

/etc/init.d/exim restart

Friday, April 15, 2011

How to block access to your server from all IPs except yours

If your server has CSF, do the following step:

Close off all UDP/TCP ports in csf.conf, then add the IPs you want to allow access to csf.allow and csf.ignore. Just make sure you add the IPs BEFORE closing the ports, or you will lock yourself out.

After that, run the command below:

csf -r

If you are using IPTABLES then
#The below line will DROP all incoming connections.
iptables -P INPUT DROP

#Allow specific IPs on specific ports, for example port 22 for IP 1.1.1.1
iptables -A INPUT -p tcp -s 1.1.1.1 --dport 22 -j ACCEPT

In this fashion you can add your IPs in the allow list.

Thursday, April 14, 2011

CGI files showing 500 internal server in Plesk

If you are getting a 500 Internal Server Error while executing CGI files, check the error log of that particular domain.

eg: /var/www/vhosts/domain.com/statistics/logs/error_log

1) The error log may show "suexec policy violation: see suexec log for more details".
>> In that case, check the suexec log at /etc/httpd/logs/suexec.log; there you will find an error like the one below:
target uid/gid (10078/505) mismatch with directory (10078/504) or program (10078/505)

Fix: /bin/cp /usr/sbin/psa-suexec /usr/sbin/suexec
OR
cp -arf /usr/local/psa/suexec/psa-suexec /usr/sbin/suexec

restart the Apache and try to load the cgi file.
(Reason: Plesk uses its own suexec file, and it might have been replaced by the original one that comes with the standard Apache package.)


2) If the error log shows this error: Premature end of script headers:

Fix: Make sure that the cgi-bin/ folder has the following permissions and ownership:

drwxr-x--- myuser psaserv cgi-bin

The script itself must be owned by the domain's FTP user, but the group must be 'psacln':

-rwxr-xr-x myuser psacln script.cgi

The permission should be 755 for your script.cgi file.
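The layout above can be rehearsed in a scratch directory before touching the vhost (750 corresponds to drwxr-x---, 755 to -rwxr-xr-x; on the real server you would also chown to myuser:psaserv and myuser:psacln):

```shell
# Scratch stand-in for the vhost directory.
v=$(mktemp -d)
mkdir "$v/cgi-bin"
touch "$v/cgi-bin/script.cgi"
chmod 750 "$v/cgi-bin"            # drwxr-x---
chmod 755 "$v/cgi-bin/script.cgi" # -rwxr-xr-x
stat -c '%A %a %n' "$v/cgi-bin" "$v/cgi-bin/script.cgi"
```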

Sunday, March 13, 2011

Changing the system time on a cPanel server

1. Select the appropriate time zone from the /usr/share/zoneinfo directory. Time zone names are relative to that directory (e.g., Pacific/Easter).

ls /usr/share/zoneinfo

2. Edit the /etc/sysconfig/clock text file so that it looks like this:

ZONE="Pacific/Easter"
UTC=true
ARC=false
3. Back up the existing /etc/localtime file:

mv /etc/localtime /etc/localtime_bak

4. Create a new soft link for /etc/localtime.

ln -s /usr/share/zoneinfo/Pacific/Easter /etc/localtime

5. Set the Hardware clock by typing the following command

/sbin/hwclock --systohc

6. Verify the time by typing the command date.
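Steps 3 and 4 can be rehearsed in a scratch directory standing in for /etc before running them as root on the real box (zone path from the example above):

```shell
# Scratch stand-in for /etc.
etc=$(mktemp -d)
touch "$etc/localtime"
mv "$etc/localtime" "$etc/localtime_bak"                    # step 3: back up
ln -s /usr/share/zoneinfo/Pacific/Easter "$etc/localtime"   # step 4: relink
ls -l "$etc/localtime"
```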

Monday, February 28, 2011

How to redirect non-SSL/HTTP requests for Mailman to SSL/HTTPS

Create a custom .htaccess file for Mailman, as seen below:
Code:

# touch /usr/local/cpanel/3rdparty/mailman/cgi-bin/.htaccess
# chown -vv mailman:mailman /usr/local/cpanel/3rdparty/mailman/cgi-bin/.htaccess


Enter the following contents into the custom .htaccess file:
Code:
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}

Friday, February 25, 2011

How to change the temp directory of Mysql

Add the line below to your my.cnf file and restart MySQL:
tmpdir = /path/to/mysqltempdirectory
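For clarity, the directive belongs in the [mysqld] section of my.cnf; a minimal sketch, keeping the post's placeholder path:

```ini
[mysqld]
tmpdir = /path/to/mysqltempdirectory
```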

Friday, January 21, 2011

Unable to start apache in plesk

If you are unable to start Apache on a Linux Plesk server and you are getting the following error in the error logs:

Unable to open logs.

First, check the current open-files limit.

Try executing the command:

ulimit -n

If it is a low value, try raising it with the command:

ulimit -n 65536

This should fix the issue.
Comments

03 Sep, 2008 | Brijesh - KBModerator
In order to make this change permanent, we need to edit /etc/sysctl.conf and add the following line:
fs.file-max = 65536

run the following command
# /sbin/sysctl -w fs.file-max=65536

As a general rule, whenever you meet an error while restarting Apache on a Plesk server, try rebuilding the Apache configuration files by running
# /usr/local/psa/admin/sbin/websrvmng -a -v