Friday, September 5, 2014
Monday, August 11, 2014
OpenStack 5 on CentOS 7
Start from a minimal CentOS 7 installation.
Step 1 wget http://dl.fedoraproject.org/pub/epel/beta/7/x86_64/epel-release-7-0.2.noarch.rpm
Step 2 vi /etc/sysconfig/selinux # Set: SELINUX=permissive
Step 3 yum update -y
Step 4 systemctl disable NetworkManager
Step 5 yum install ntp -y
Step 6 systemctl disable firewalld # OpenStack's installer does not understand firewalld rules
Step 7 vi /etc/ntp.conf
server 1.in.pool.ntp.org
server 1.asia.pool.ntp.org
server 0.asia.pool.ntp.org
Step 8 vi /etc/hosts
192.168.122.113 guggle.co.in # OpenStack needs a static IP address.
Step 9 yum install -y http://rdo.fedorapeople.org/rdo-release.rpm
Step 10 yum install -y openstack-packstack
Step 11 packstack --gen-answer-file=/root/answers.txt
Step 12 packstack --answer-file=/root/answers.txt
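Before running step 12, you can tune the answer file generated in step 11. A few commonly edited keys (key names as they appear in packstack-generated answer files of this vintage; the values here are illustrative placeholders):

```
# /root/answers.txt (fragment)
CONFIG_KEYSTONE_ADMIN_PW=MySecretPass                       # admin password for Horizon and the CLI
CONFIG_NTP_SERVERS=0.asia.pool.ntp.org,1.asia.pool.ntp.org  # matches the ntp.conf servers above
CONFIG_PROVISION_DEMO=n                                     # skip creating the demo tenant/network
```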
Welcome to Installer setup utility
Packstack changed given value to required value /root/.ssh/id_rsa.pub
Installing:
Clean Up [ DONE ]
Setting up ssh keys [ DONE ]
Discovering hosts' details [ DONE ]
Adding pre install manifest entries [ DONE ]
Preparing servers [ DONE ]
Adding AMQP manifest entries [ DONE ]
Adding MySQL manifest entries [ DONE ]
Adding Keystone manifest entries [ DONE ]
Adding Glance Keystone manifest entries [ DONE ]
Adding Glance manifest entries [ DONE ]
Adding Cinder Keystone manifest entries [ DONE ]
Adding Cinder manifest entries [ DONE ]
Checking if the Cinder server has a cinder-volumes vg[ DONE ]
Adding Nova API manifest entries [ DONE ]
Adding Nova Keystone manifest entries [ DONE ]
Adding Nova Cert manifest entries [ DONE ]
Adding Nova Conductor manifest entries [ DONE ]
Creating ssh keys for Nova migration [ DONE ]
Gathering ssh host keys for Nova migration [ DONE ]
Adding Nova Compute manifest entries [ DONE ]
Adding Nova Scheduler manifest entries [ DONE ]
Adding Nova VNC Proxy manifest entries [ DONE ]
Adding Openstack Network-related Nova manifest entries[ DONE ]
Adding Nova Common manifest entries [ DONE ]
Adding Neutron API manifest entries [ DONE ]
Adding Neutron Keystone manifest entries [ DONE ]
Adding Neutron L3 manifest entries [ DONE ]
Adding Neutron L2 Agent manifest entries [ DONE ]
Adding Neutron DHCP Agent manifest entries [ DONE ]
Adding Neutron LBaaS Agent manifest entries [ DONE ]
Adding Neutron Metering Agent manifest entries [ DONE ]
Adding Neutron Metadata Agent manifest entries [ DONE ]
Adding OpenStack Client manifest entries [ DONE ]
Adding Horizon manifest entries [ DONE ]
Adding Swift Keystone manifest entries [ DONE ]
Adding Swift builder manifest entries [ DONE ]
Adding Swift proxy manifest entries [ DONE ]
Adding Swift storage manifest entries [ DONE ]
Adding Swift common manifest entries [ DONE ]
Adding Provisioning Demo manifest entries [ DONE ]
Adding MongoDB manifest entries [ DONE ]
Adding Ceilometer manifest entries [ DONE ]
Adding Ceilometer Keystone manifest entries [ DONE ]
Adding Nagios server manifest entries [ DONE ]
Adding Nagios host manifest entries [ DONE ]
Adding post install manifest entries [ DONE ]
Installing Dependencies [ DONE ]
Copying Puppet modules and manifests [ DONE ]
Applying 192.168.122.113_prescript.pp
192.168.122.113_prescript.pp: [ DONE ]
Applying 192.168.122.113_amqp.pp
Applying 192.168.122.113_mysql.pp
192.168.122.113_amqp.pp: [ DONE ]
192.168.122.113_mysql.pp: [ DONE ]
Applying 192.168.122.113_keystone.pp
Applying 192.168.122.113_glance.pp
Applying 192.168.122.113_cinder.pp
192.168.122.113_keystone.pp: [ DONE ]
192.168.122.113_cinder.pp: [ DONE ]
192.168.122.113_glance.pp: [ DONE ]
Applying 192.168.122.113_api_nova.pp
192.168.122.113_api_nova.pp: [ DONE ]
Applying 192.168.122.113_nova.pp
192.168.122.113_nova.pp: [ DONE ]
Applying 192.168.122.113_neutron.pp
192.168.122.113_neutron.pp: [ DONE ]
Applying 192.168.122.113_neutron_fwaas.pp
Applying 192.168.122.113_osclient.pp
Applying 192.168.122.113_horizon.pp
192.168.122.113_neutron_fwaas.pp: [ DONE ]
192.168.122.113_osclient.pp: [ DONE ]
192.168.122.113_horizon.pp: [ DONE ]
Applying 192.168.122.113_ring_swift.pp
192.168.122.113_ring_swift.pp: [ DONE ]
Applying 192.168.122.113_swift.pp
Applying 192.168.122.113_provision_demo.pp
192.168.122.113_swift.pp: [ DONE ]
192.168.122.113_provision_demo.pp: [ DONE ]
Applying 192.168.122.113_mongodb.pp
192.168.122.113_mongodb.pp: [ DONE ]
Applying 192.168.122.113_ceilometer.pp
Applying 192.168.122.113_nagios.pp
Applying 192.168.122.113_nagios_nrpe.pp
192.168.122.113_ceilometer.pp: [ DONE ]
192.168.122.113_nagios_nrpe.pp: [ DONE ]
192.168.122.113_nagios.pp: [ DONE ]
Applying 192.168.122.113_postscript.pp
192.168.122.113_postscript.pp: [ DONE ]
Applying Puppet manifests [ DONE ]
Finalizing [ DONE ]
**** Installation completed successfully ******
Additional information:
* Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
* Did not create a cinder volume group, one already existed
* File /root/keystonerc_admin has been created on OpenStack client host 192.168.122.113. To use the command line tools you need to source the file.
* To access the OpenStack Dashboard browse to http://192.168.122.113/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
* To use Nagios, browse to http://192.168.122.113/nagios username: nagiosadmin, password: 55209fe74e694f3c
* The installation log file is available at: /var/tmp/packstack/20140808-001328-M74eSH/openstack-setup.log
* The generated manifests are available at: /var/tmp/packstack/20140808-001328-M74eSH/manifests
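As the notes above say, the command-line tools need /root/keystonerc_admin sourced. A minimal sketch of what sourcing does (the file written below is a stand-in with fake values, not the packstack-generated one, which holds the real admin credentials):

```shell
# Write a stand-in keystonerc_admin with placeholder values.
cat > /tmp/keystonerc_admin <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=secret
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://192.168.122.113:5000/v2.0/
EOF

# Sourcing exports the OS_* variables that the keystone/nova/glance CLIs read.
. /tmp/keystonerc_admin
echo "$OS_USERNAME"   # prints: admin
```

With the real file sourced, commands such as keystone service-list or nova list authenticate automatically.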
Tuesday, July 22, 2014
RHEL 7 /CentOS 7 Boot Single User Mode
- Add rd.break to the end of the line that starts with linux in the GRUB2 entry (press e to edit the entry, then Ctrl-x to boot; rd.break drops you into an emergency shell in the initramfs, which is why the root filesystem appears under /sysroot below)
- run mount -o remount,rw /sysroot when the system boots
- run chroot /sysroot
- run passwd
- run touch /.autorelabel
- run exit to leave chroot
- run exit to logout
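For reference, the edited linux line in the GRUB2 entry looks roughly like this (the kernel version and root device are illustrative and will differ on your system; only the trailing rd.break is the addition):

```
linux16 /vmlinuz-3.10.0-123.el7.x86_64 root=/dev/mapper/centos-root ro rd.break
```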
Sunday, June 29, 2014
Using WeeChat for Gtalk
Installing WeeChat on Fedora 20
yum -y install weechat libnotify notify-python wget python-xmpp
Configuration
$weechat-curses
Setup scripts
$ cd ~/.weechat/python/autoload
$ wget http://weechat.org/files/scripts/weeget.py
/python load weeget
/weeget
/weechat install buffers
/set weechat.bar.buffers.position top
/weechat install jabber
/jabber add gtalk rajatjpatel@gmail.com yousetthatpasswdisworkingnow talk.google.com:5223
/jabber connect gtalk
Tuesday, May 20, 2014
To install Docker on Fedora
A Docker-based container sandbox provides a number of advantages for application deployment, such as lightweight isolation, deployment portability, and ease of maintenance.
Why docker?
• Smaller than VMs • Improved performance • Secure • Flexible
Not only for the cloud environment, Docker can also be quite useful for end users, especially when you want to test out particular software under a specific Linux environment. You can easily spin up a Docker container for the target environment, install and test the software in it, and then throw away the container once you are done. The whole process from beginning to end is quite efficient, and you can avoid messing up your end system all along.
In this post, I am going to describe how to create and manage Docker containers on Fedora.
To install Docker on Fedora, use the following commands:
# yum install docker-io
# systemctl start docker.service
# systemctl enable docker.service
Basic Usage of Docker
To start a new Docker container, you need to decide which Docker image to use. You can search the official Docker image index, which lists publicly available images: Linux base images maintained by the Docker team (e.g., Ubuntu, Debian, Fedora, CentOS) as well as user-contributed custom images (e.g., MySQL, Redis, WordPress).
For example, to start an Ubuntu container in interactive mode, run the following command. The last argument, '/bin/bash', is executed inside the container on launch.
docker run -i -t ubuntu /bin/bash
To download an image without starting a container, use docker pull ubuntu or docker pull fedora.
The first time you run docker run, it downloads the Ubuntu image over the network and then boots a container from it. An Ubuntu container starts instantly, and you get a console prompt inside it: a full-fledged Ubuntu userland inside the container sandbox.
To list all containers:
docker ps -a
To start a container of your choice:
docker start [container-id]
To remove a container from your local host:
docker rm [container-id]
To attach to a running container in order to view or interact with it:
docker attach [container-id]
To remove a container image from the local repository:
docker rmi [image-id]
To search for an image in the repository:
docker search fedora (or docker search centos)
Monday, May 5, 2014
Linux Performance Analysis and Tuning
What is “tuned”?
Tuning profile delivery mechanism
Red Hat ships tuned profiles that improve performance for many workloads...hopefully yours!
To install tuned:
# yum install tuned -y
Now start the services provided by tuned:
# service tuned start
# chkconfig tuned on
# service ktune start
# chkconfig ktune on
To find the current active profile and state of service:
# tuned-adm active
Current active profile: default
Service tuned: enabled, running
Service ktune: enabled, running
To list all the available profiles:
# tuned-adm list
Available profiles:
- default
- throughput-performance
- laptop-ac-powersave
- spindown-disk
- desktop-powersave
- laptop-battery-powersave
- latency-performance
- server-powersave
- enterprise-storage
Current active profile: default
To switch to a different profile:
# tuned-adm profile spindown-disk
NOTE: spindown-disk is one of the profiles
Each profile has four configuration files under /etc/tune-profiles/<profile-name>. To create a profile of your own, simply copy one of the profile directories under a different name, change the config files inside it to suit your requirements, and activate it.
# cd /etc/tune-profiles/
# cp -a default myprofile
# cd myprofile
# ls
ktune.sh ktune.sysconfig sysctl.ktune tuned.conf
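As a sketch, the tuned.conf in a copied profile enables or disables tuned's tuning plugins. The section names below follow the RHEL 6-era tuned that ships these four-file profiles; check the profiles installed on your system for the exact set:

```
# /etc/tune-profiles/myprofile/tuned.conf (illustrative)
[main]
# how often (in seconds) the monitoring/tuning loop runs
interval=10

[NetTuning]
enabled=True

[DiskTuning]
enabled=True

[CPUTuning]
enabled=True
```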
# tuned-adm list
Available profiles:
- balanced
- desktop
- latency-performance
- powersave
- sap
- throughput-performance
- virtual-guest
- virtual-host
Current active profile: balanced
# tuned-adm profile myprofile
To disable all tuning, run:
# tuned-adm off (or: # service tuned stop)
# tuned-adm profile throughput-performance
# tuned-adm active
Current active profile: throughput-performance
# time taskset -c 0 seq 1 60000000 > /dev/null
real 0m0.689s <--
user 0m0.676s
sys 0m0.012s
# service tuned stop
Redirecting to /bin/systemctl stop tuned.service
# time taskset -c 0 seq 1 60000000 > /dev/null
real 0m0.698s <--
user 0m0.686s
sys 0m0.012s
Above sample from laptop.
# uname -a
Linux rajat.patel.fc20 3.14.2-200.fc20.x86_64 #1 SMP Mon Apr 28 14:40:57 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
Guidelines on hardening Linux server installation
A. Use the least amount of permissions to accomplish the required task.
B. Use the minimum of software tools and packages to implement the required services.
C. Securing your server is a continuous process, not a one-time activity.
1. Always start with a minimal server installation and add packages as they're needed. Reasoning: every piece of software can be a potential vulnerability. There's no need to insert a vulnerability with something you'll not use in the first place.
2. Set up a user account and install sudo. Add the user to the sudoers file and configure the system to allow root login only on tty1-tty8.
3. Install ssh and reconfigure it to listen on a non-standard ephemeral port (> 1024). If possible, set up port knocking to unlock the new ssh port - do not use a default knocking sequence!
4. Configure ssh to disable password authentication and permit only pubkey authentication. Install your public key in the user account's authorized_keys. Always use a strong passphrase for your keys and guard your private keys as well as you can!
5. If the server requires ordinary users to log in, configure PAM to harden the password policy where possible. If a user who logs in does not need full command access, give them a restricted shell.
6. Install only the services your server needs. When you can choose, prefer services that encrypt their traffic. For example, if your users need remote file access and can use either FTP or SCP/SFTP, choose SCP/SFTP. Avoid a telnet service if at all possible (but keep a telnet client around, as you'll probably need it when troubleshooting TCP connections).
7. Use SELinux or AppArmor if you can. Learn to create custom SELinux policies if needed (some software just won't work with SELinux in enforcing mode).
8. Set up iptables in the most restrictive way. On the INPUT chain, block all ports except those your services use on that server. Where possible, limit open ports to the IP addresses permitted to access them.
9. Set rules on the OUTPUT chain as well. Many exploits work by establishing a connection from the compromised server back to the attacker's machine, which usually bypasses external firewalls. Limiting outgoing traffic can mitigate such attacks and render them useless.
10. Implement a remote central log server and install some sort of log-analyzing software. Check logs frequently and search for unusual patterns.
11. Check your /etc/fstab and add the 'nodev,noexec,nosuid' options on filesystems that will not hold executables or devices. This is far from bullet-proof and can be thwarted by a competent attacker, but it can still stop some script kiddies and automated attacks.
12. Use chroot when possible. I know this is also almost trivial to evade, but still, why would I ease the attacker's job?
13. Implement tripwire or similar software if you can keep your file-signature database on some non-volatile media (like CD-ROM).
14. Upgrade and apply patches if at all possible.
15. Run some audit tool, both local (Lynis) and remote (OpenVAS, Nessus) to check if you managed to cover all the bases. Analyze reports made by those apps and apply necessary changes to your system.
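As an illustration of points 3 and 4, the relevant sshd_config directives might look like this (the port number is an arbitrary example; pick your own):

```
# /etc/ssh/sshd_config (fragment)
Port 2222
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
```

Restart sshd after editing, and keep your current session open while testing the new settings so you don't lock yourself out.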
Thursday, April 17, 2014
List of Network Diagnostic Tools
In RHEL/CentOS/Fedora:
ip -- show / manipulate routing, devices, policy routing and tunnels
ifconfig -- configure a network interface
ethtool -- query or control network driver and hardware settings
tcpdump -- dump traffic on a network
wireshark -- Interactively dump and analyze network traffic
netstat -- Print network connections, routing tables, interface statistics, masquerade connections, and multicast memberships
ss -- another utility to investigate sockets
dropwatch -- consolidates several kernel packet-drop checks into one tool, making it easier for a sysadmin or developer to detect lost packets
systemtap -- systemtap script translator/driver
nmap -- Network exploration tool and security / port scanner
nc -- arbitrary TCP and UDP connections and listens
ping -- send ICMP ECHO_REQUEST to network hosts
ping6 -- send ICMP ECHO_REQUEST to network hosts
iptables -- administration tool for IPv4 packet filtering and NAT
ip6tables -- administration tool for IPv6 packet filtering and NAT
arp -- manipulate the system ARP cache
arping -- send ARP REQUEST to a neighbour host
tc -- show / manipulate traffic control settings in the Linux kernel
lnstat -- unified Linux network statistics
nstat / rtacct -- network statistics tools
traceroute -- print the route packets take to a network host
tracepath -- traces the path to a network host, discovering the MTU along the path
tunctl -- create and manage persistent TUN/TAP interfaces
Sunday, February 9, 2014
OpenLDAP server on Fedora 20
Installation
# yum install openldap-servers migrationtools -y
Initial Start
Generate a new password hash for the OpenLDAP server:
# slappasswd
{SSHA}KMAAqJoh0gUxs8TPZfa2MvZezcp5+O4E
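The hash from slappasswd is typically installed as olcRootPW with ldapmodify. A sketch (the DN matches the {2}hdb database in the default Fedora configuration, and the hash is the one generated above):

```
# chrootpw.ldif - set the Manager password on the hdb database (sketch)
dn: olcDatabase={2}hdb,cn=config
changetype: modify
add: olcRootPW
olcRootPW: {SSHA}KMAAqJoh0gUxs8TPZfa2MvZezcp5+O4E
```

Apply it with: ldapmodify -Y EXTERNAL -H ldapi:/// -f chrootpw.ldif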
Configuration Files
[root@localhost openldap]# ls -l
total 20
drwxr-xr-x. 2 root root 4096 Feb 7 11:53 certs
-rw-r--r--. 1 root root 121 Jan 13 19:50 check_password.conf
-rw-r--r--. 1 root root 364 Jan 13 19:50 ldap.conf
drwxr-xr-x. 2 root root 4096 Feb 7 11:53 schema
drwx------. 3 ldap ldap 4096 Feb 7 11:53 slapd.d
[root@localhost openldap]# cd slapd.d/
[root@localhost slapd.d]# ls -l
total 8
drwxr-x---. 3 ldap ldap 4096 Feb 7 12:12 cn=config
-rw-------. 1 ldap ldap 589 Feb 7 11:53 cn=config.ldif
[root@localhost slapd.d]# cd cn\=config
[root@localhost cn=config]# ls -l
total 24
drwxr-x---. 2 ldap ldap 4096 Feb 7 11:53 cn=schema
-rw-------. 1 ldap ldap 378 Feb 7 11:53 cn=schema.ldif
-rw-------. 1 ldap ldap 513 Feb 7 11:53 olcDatabase={0}config.ldif
-rw-------. 1 ldap ldap 408 Feb 7 11:53 olcDatabase={-1}frontend.ldif
-rw-------. 1 ldap ldap 562 Feb 7 11:53 olcDatabase={1}monitor.ldif
-rw-------. 1 ldap ldap 609 Feb 7 11:53 olcDatabase={2}hdb.ldif
Fedora's default configuration for the OpenLDAP server:
[root@localhost cn=config]# cat olcDatabase\=\{2\}hdb.ldif
1 # AUTO-GENERATED FILE - DO NOT EDIT!! Use ldapmodify.
2 # CRC32 2792fd93
3 dn: olcDatabase={2}hdb
4 objectClass: olcDatabaseConfig
5 objectClass: olcHdbConfig
6 olcDatabase: {2}hdb
7 olcDbDirectory: /var/lib/ldap
8 olcSuffix: dc=my-domain,dc=com
9 olcRootDN: cn=Manager,dc=my-domain,dc=com
10 olcDbIndex: objectClass eq,pres
11 olcDbIndex: ou,cn,mail,surname,givenname eq,pres,sub
12 structuralObjectClass: olcHdbConfig
13 entryUUID: 147058b0-240c-1033-97b1-2b95e7519548
14 creatorsName: cn=config
15 createTimestamp: 20140207062318Z
16 entryCSN: 20140207062318.835797Z#000000#000#000000
17 modifiersName: cn=config
18 modifyTimestamp: 20140207062318Z
[root@localhost cn=config]# cat olcDatabase\=\{1\}monitor.ldif
1 # AUTO-GENERATED FILE - DO NOT EDIT!! Use ldapmodify.
2 # CRC32 98c50304
3 dn: olcDatabase={1}monitor
4 objectClass: olcDatabaseConfig
5 olcDatabase: {1}monitor
6 olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external
7 ,cn=auth" read by dn.base="cn=Manager,dc=my-domain,dc=com" read by * none
8 structuralObjectClass: olcDatabaseConfig
9 entryUUID: 147053f6-240c-1033-97b0-2b95e7519548
10 creatorsName: cn=config
11 createTimestamp: 20140207062318Z
12 entryCSN: 20140207062318.835676Z#000000#000#000000
13 modifiersName: cn=config
14 modifyTimestamp: 20140207062318Z
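Note that olcSuffix and olcRootDN above still carry the stock dc=my-domain,dc=com values, and no olcRootPW is set. A hedged sketch of pointing the database at the tomjerry.and domain used below, reusing the hash printed by slappasswd earlier (apply with ldapmodify -Y EXTERNAL -H ldapi:/// -f domain.ldif; the file name is arbitrary):

```
dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcSuffix
olcSuffix: dc=tomjerry,dc=and
-
replace: olcRootDN
olcRootDN: cn=Manager,dc=tomjerry,dc=and
-
add: olcRootPW
olcRootPW: {SSHA}KMAAqJoh0gUxs8TPZfa2MvZezcp5+O4E
```

The monitor database ACL shown above also references cn=Manager,dc=my-domain,dc=com; it can be updated the same way if you want rootdn read access to cn=monitor.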
# cd /usr/share/migrationtools/
# vi migrate_common.ph
48 $NAMINGCONTEXT{'group'} = "ou=Group"; (changed from "ou=Groups": drop the trailing "s")
70 # Default DNS domain
71 $DEFAULT_MAIL_DOMAIN = "tomjerry.and";
72
73 # Default base
74 $DEFAULT_BASE = "dc=tomjerry,dc=and";
90 $EXTENDED_SCHEMA = 1; ## "0" edit to "1"##
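The same migrate_common.ph edits can be made non-interactively with sed; a sketch, demonstrated on a throwaway copy (point SRC at /usr/share/migrationtools/migrate_common.ph on a real system; the padl.com values below stand in for the stock upstream defaults):

```shell
# Throwaway copy standing in for /usr/share/migrationtools/migrate_common.ph
SRC=/tmp/migrate_common.ph
cat > "$SRC" <<'EOF'
$NAMINGCONTEXT{'group'} = "ou=Groups";
$DEFAULT_MAIL_DOMAIN = "padl.com";
$DEFAULT_BASE = "dc=padl,dc=com";
$EXTENDED_SCHEMA = 0;
EOF

# Apply the edits described above: drop the "s" from ou=Groups,
# set the mail domain and base DN, and enable the extended schema.
sed -i \
  -e 's/"ou=Groups"/"ou=Group"/' \
  -e 's/^\$DEFAULT_MAIL_DOMAIN = .*/$DEFAULT_MAIL_DOMAIN = "tomjerry.and";/' \
  -e 's/^\$DEFAULT_BASE = .*/$DEFAULT_BASE = "dc=tomjerry,dc=and";/' \
  -e 's/^\$EXTENDED_SCHEMA = 0;/$EXTENDED_SCHEMA = 1;/' \
  "$SRC"

cat "$SRC"
```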
Configure the FQDN for OpenLDAP, then preview the generated base entries:
# ./migrate_base.pl
dn: dc=tomjerry,dc=and
dc: tomjerry
objectClass: top
objectClass: domain
objectClass: domainRelatedObject
associatedDomain: tomjerry.and
dn: ou=Mounts,dc=tomjerry,dc=and
ou: Mounts
objectClass: top
objectClass: organizationalUnit
objectClass: domainRelatedObject
associatedDomain: tomjerry.and
dn: ou=Rpc,dc=tomjerry,dc=and
ou: Rpc
objectClass: top
objectClass: organizationalUnit
objectClass: domainRelatedObject
associatedDomain: tomjerry.and
dn: ou=People,dc=tomjerry,dc=and
ou: People
objectClass: top
objectClass: organizationalUnit
objectClass: domainRelatedObject
associatedDomain: tomjerry.and
dn: ou=Hosts,dc=tomjerry,dc=and
ou: Hosts
objectClass: top
objectClass: organizationalUnit
objectClass: domainRelatedObject
associatedDomain: tomjerry.and
dn: nisMapName=netgroup.byuser,dc=tomjerry,dc=and
nismapname: netgroup.byuser
objectClass: top
objectClass: nisMap
objectClass: domainRelatedObject
associatedDomain: tomjerry.and
dn: ou=Netgroup,dc=tomjerry,dc=and
ou: Netgroup
objectClass: top
objectClass: organizationalUnit
objectClass: domainRelatedObject
associatedDomain: tomjerry.and
dn: ou=Networks,dc=tomjerry,dc=and
ou: Networks
objectClass: top
objectClass: organizationalUnit
objectClass: domainRelatedObject
associatedDomain: tomjerry.and
dn: ou=Services,dc=tomjerry,dc=and
ou: Services
objectClass: top
objectClass: organizationalUnit
objectClass: domainRelatedObject
associatedDomain: tomjerry.and
dn: nisMapName=netgroup.byhost,dc=tomjerry,dc=and
nismapname: netgroup.byhost
objectClass: top
objectClass: nisMap
objectClass: domainRelatedObject
associatedDomain: tomjerry.and
dn: ou=Aliases,dc=tomjerry,dc=and
ou: Aliases
objectClass: top
objectClass: organizationalUnit
objectClass: domainRelatedObject
associatedDomain: tomjerry.and
dn: ou=Group,dc=tomjerry,dc=and
ou: Group
objectClass: top
objectClass: organizationalUnit
objectClass: domainRelatedObject
associatedDomain: tomjerry.and
dn: ou=Protocols,dc=tomjerry,dc=and
ou: Protocols
objectClass: top
objectClass: organizationalUnit
objectClass: domainRelatedObject
associatedDomain: tomjerry.and
To generate base.ldif:
# ./migrate_base.pl > /root/base.ldif
Add users to the system and export them for the LDAP server:
# mkdir /home/ldappeople
# useradd -d /home/ldappeople/ldappeople1 ldappeople1
# useradd -d /home/ldappeople/ldappeople2 ldappeople2
# useradd -d /home/ldappeople/ldappeople3 ldappeople3
# useradd -d /home/ldappeople/ldappeople4 ldappeople4
# useradd -d /home/ldappeople/ldappeople5 ldappeople5
# passwd ldappeople1
# passwd ldappeople2
# passwd ldappeople3
# passwd ldappeople4
# passwd ldappeople5
# getent passwd
# getent passwd | tail -n 5 > /root/users
# getent shadow
# getent shadow | tail -n 5 > /root/passwords
# getent group | tail -n 5
# getent group | tail -n 5 > /root/groups
# vi /usr/share/migrationtools/migrate_passwd.pl
186 sub read_shadow_file
187 {
188 open(SHADOW, "/root/passwords") || return; ## add your path, for example: /root/passwords ##
189 while(<SHADOW>) {
190 chop;
191 ($shadowUser) = split(/:/, $_);
192 $shadowUsers{$shadowUser} = $_;
193 }
194 close(SHADOW);
195 }
Migrate all users and their passwords to LDAP.
# ls -l /root/
total 24
-rw-------. 1 root root 980 Feb 7 11:39 anaconda-ks.cfg
-rw-r--r--. 1 root root 2088 Feb 10 10:16 base.ldif
-rw-r--r--. 1 root root 100 Feb 10 10:24 groups
-rw-r--r--. 1 root root 713 Feb 7 14:30 ldap.sh
-rw-r--r--. 1 root root 650 Feb 10 10:24 passwords
-rw-r--r--. 1 root root 320 Feb 10 10:23 users
# ./migrate_passwd.pl /root/users
dn: uid=ldappeople1,ou=People,dc=tomjerry,dc=and
uid: ldappeople1
cn: ldappeople1
sn: ldappeople1
mail: ldappeople1@tomjerry.and
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: top
objectClass: shadowAccount
userPassword: {crypt}$6$yX9pRsgh$73UTx9iUPGhzGmTRw0C2c2SueSLcgTpV.6xlWuUrJdiZsCdV0b2er.kynPcgyqT/4VtJYLSu/fYKakeHC/2az1
shadowLastChange: 16111
shadowMax: 99999
shadowWarning: 7
loginShell: /bin/bash
uidNumber: 1001
gidNumber: 1001
homeDirectory: /home/ldappeople/ldappeople1
dn: uid=ldappeople2,ou=People,dc=tomjerry,dc=and
uid: ldappeople2
cn: ldappeople2
sn: ldappeople2
mail: ldappeople2@tomjerry.and
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: top
objectClass: shadowAccount
userPassword: {crypt}$6$jHCMjc3c$kX5.rv15RUh3FFRpb5WPuHo/w2Lz.CA1fV9u7Mv0C921yKl6BNuRSW2yuyRZnzFkgqSuz7zFfRPaH8CZbpqx.1
shadowLastChange: 16111
shadowMax: 99999
shadowWarning: 7
loginShell: /bin/bash
uidNumber: 1002
gidNumber: 1002
homeDirectory: /home/ldappeople/ldappeople2
dn: uid=ldappeople3,ou=People,dc=tomjerry,dc=and
uid: ldappeople3
cn: ldappeople3
sn: ldappeople3
mail: ldappeople3@tomjerry.and
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: top
objectClass: shadowAccount
userPassword: {crypt}$6$OJjfVqpu$mziIRrTz0ZD1LYQsul5ELhEAaps2aX/d5oV62OlOexaVtu0hD1zp8ChYcdKCu1qn4E/5hiLo5ubNE4ytWy8tF0
shadowLastChange: 16111
shadowMax: 99999
shadowWarning: 7
loginShell: /bin/bash
uidNumber: 1003
gidNumber: 1003
homeDirectory: /home/ldappeople/ldappeople3
dn: uid=ldappeople4,ou=People,dc=tomjerry,dc=and
uid: ldappeople4
cn: ldappeople4
sn: ldappeople4
mail: ldappeople4@tomjerry.and
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: top
objectClass: shadowAccount
userPassword: {crypt}$6$bOo7/SBC$yjDSZCCoaLfJwemlW3Cwh84EJNVLmTImYubHnfzfrpG7ROBV66PTcWorZ1EUdxNRZVM5izY2sMJ3VQXgfcy9J1
shadowLastChange: 16111
shadowMax: 99999
shadowWarning: 7
loginShell: /bin/bash
uidNumber: 1004
gidNumber: 1004
homeDirectory: /home/ldappeople/ldappeople4
dn: uid=ldappeople5,ou=People,dc=tomjerry,dc=and
uid: ldappeople5
cn: ldappeople5
sn: ldappeople5
mail: ldappeople5@tomjerry.and
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: top
objectClass: shadowAccount
userPassword: {crypt}$6$ONFG4JSW$WR.eRoK6oO2lwuAVTYDVSAwfaEIyd3EKVRL7//9J80dk6XkkooFY73oCf0JDkEZ1f9wib3/VaXotwmgaoZd6h1
shadowLastChange: 16111
shadowMax: 99999
shadowWarning: 7
loginShell: /bin/bash
uidNumber: 1005
gidNumber: 1005
homeDirectory: /home/ldappeople/ldappeople5
# ./migrate_passwd.pl /root/users > /root/users.ldif
# ./migrate_group.pl /root/groups
dn: cn=ldappeople1,ou=Group,dc=tomjerry,dc=and
objectClass: posixGroup
objectClass: top
cn: ldappeople1
userPassword: {crypt}x
gidNumber: 1001
dn: cn=ldappeople2,ou=Group,dc=tomjerry,dc=and
objectClass: posixGroup
objectClass: top
cn: ldappeople2
userPassword: {crypt}x
gidNumber: 1002
dn: cn=ldappeople3,ou=Group,dc=tomjerry,dc=and
objectClass: posixGroup
objectClass: top
cn: ldappeople3
userPassword: {crypt}x
gidNumber: 1003
dn: cn=ldappeople4,ou=Group,dc=tomjerry,dc=and
objectClass: posixGroup
objectClass: top
cn: ldappeople4
userPassword: {crypt}x
gidNumber: 1004
dn: cn=ldappeople5,ou=Group,dc=tomjerry,dc=and
objectClass: posixGroup
objectClass: top
cn: ldappeople5
userPassword: {crypt}x
gidNumber: 1005
# ./migrate_group.pl /root/groups > /root/groups.ldif
# ls -ltr /root/
total 32
-rw-------. 1 root root 980 Feb 7 11:39 anaconda-ks.cfg
-rw-r--r--. 1 root root 713 Feb 7 14:30 ldap.sh
-rw-r--r--. 1 root root 2088 Feb 10 10:16 base.ldif
-rw-r--r--. 1 root root 320 Feb 10 10:23 users
-rw-r--r--. 1 root root 650 Feb 10 10:24 passwords
-rw-r--r--. 1 root root 100 Feb 10 10:24 groups
-rw-r--r--. 1 root root 2785 Feb 10 10:34 users.ldif
-rw-r--r--. 1 root root 720 Feb 10 10:35 groups.ldif
Now upload the LDIF files to the LDAP server.
# slaptest -u
52f86020 ldif_read_file: checksum error on "/etc/openldap/slapd.d/cn=config/olcDatabase={1}monitor.ldif"
52f86020 ldif_read_file: checksum error on "/etc/openldap/slapd.d/cn=config/olcDatabase={2}hdb.ldif"
config file testing succeeded
#systemctl restart slapd.service
# systemctl status slapd.service
slapd.service - OpenLDAP Server Daemon
Loaded: loaded (/usr/lib/systemd/system/slapd.service; enabled)
Active: active (running) since Mon 2014-02-10 10:48:59 IST; 5min ago
Process: 799 ExecStart=/usr/sbin/slapd -u ldap -h ${SLAPD_URLS} $SLAPD_OPTIONS (code=exited, status=0/SUCCESS)
Process: 771 ExecStartPre=/usr/libexec/openldap/check-config.sh (code=exited, status=0/SUCCESS)
Main PID: 801 (slapd)
CGroup: /system.slice/slapd.service
└─801 /usr/sbin/slapd -u ldap -h ldapi:/// ldap:///
Feb 10 10:53:10 localhost.localdomain slapd[801]: conn=1002 op=2 UNBIND
Feb 10 10:53:10 localhost.localdomain slapd[801]: conn=1002 fd=11 closed
Feb 10 10:53:27 localhost.localdomain slapd[801]: conn=1003 fd=11 ACCEPT from IP=[::1]:47273 (IP=[::]:389)
Feb 10 10:53:27 localhost.localdomain slapd[801]: conn=1003 op=0 BIND dn="cn=Manager,dc=tomjerry,dc=and" method=128
Feb 10 10:53:27 localhost.localdomain slapd[801]: conn=1003 op=0 BIND dn="cn=Manager,dc=tomjerry,dc=and" mech=SIMPLE ssf=0
Feb 10 10:53:27 localhost.localdomain slapd[801]: conn=1003 op=0 RESULT tag=97 err=0 text=
Feb 10 10:53:27 localhost.localdomain slapd[801]: conn=1003 op=1 ADD dn="cn=ldappeople1,ou=Group,dc=tomjerry,dc=and"
Feb 10 10:53:27 localhost.localdomain slapd[801]: conn=1003 op=1 RESULT tag=105 err=21 text=objectClass: value #0 invalid per syntax
Feb 10 10:53:27 localhost.localdomain slapd[801]: conn=1003 op=2 UNBIND
Feb 10 10:53:27 localhost.localdomain slapd[801]: conn=1003 fd=11 closed
# ldapadd -x -W -D "cn=Manager,dc=tomjerry,dc=and" -f /root/base.ldif
# ldapadd -x -W -D "cn=Manager,dc=tomjerry,dc=and" -f /root/users.ldif
# ldapadd -x -W -D "cn=Manager,dc=tomjerry,dc=and" -f /root/groups.ldif
# ldapsearch -x -b "dc=tomjerry,dc=and" | less
Network Ports
# lsof -i -n -P | grep -i slapd
slapd 496 ldap 8u IPv4 32324 0t0 TCP *:389 (LISTEN)
slapd 496 ldap 9u IPv6 32325 0t0 TCP *:389 (LISTEN)
Generate a self-signed TLS certificate for the LDAP server and publish it for clients:
# openssl req -new -x509 -nodes -out /etc/pki/tls/certs/tomjerry.pem -keyout /etc/pki/tls/certs/tomjerrykey.pem -days 365
Generating a 2048 bit RSA private key
...+++
..................................................+++
writing new private key to '/etc/pki/tls/certs/tomjerrykey.pem'
-----
You are about to be asked to enter information that will be incorporated into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank.
For some fields there will be a default value; if you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:IN
State or Province Name (full name) []:MAHA
Locality Name (eg, city) [Default City]:MUMBAI
Organization Name (eg, company) [Default Company Ltd]:TOMJERRY. INC,
Organizational Unit Name (eg, section) []:IT
Common Name (eg, your name or your server's hostname) []:tomjerry.and
Email Address []:rajat@tomjerry.and
# chown -R root:ldap /etc/pki/tls/certs/tomjerry*
# cp -rvf /etc/pki/tls/certs/tomjerry* /var/ftp/pub/
`/etc/pki/tls/certs/tomjerrykey.pem' -> `/var/ftp/pub/tomjerrykey.pem'
`/etc/pki/tls/certs/tomjerry.pem' -> `/var/ftp/pub/tomjerry.pem'
# ln -s /var/ftp/pub/ /var/www/html/
On the client system:
# ping tomjerry.and
# vi /etc/hosts
Add the LDAP server's IP address and FQDN, then save.
# getent passwd ldappeople1
# su - ldappeople1
su: user ldappeople1 does not exist
# authconfig-tui or authconfig-gtk (either tool can configure the LDAP client)
Choose -- LDAP
LDAP Server Base DN: dc=tomjerry,dc=and
LDAP Server: ldap://ldap.tomjerry.and
Use TLS to Encrypt connections
Download CA Certificate: http://ldap.tomjerry.and/pub/tomjerry.pem
# getent passwd ldappeople1
Thursday, January 2, 2014
Linux Kdump
Why Kdump
Kdump is a utility that helps determine the cause of kernel crashes, system hangs, and unexpected reboots.
About Kdump
Kdump is the Linux kernel crash-dump mechanism, and Red Hat recommends enabling it. In the event of a system crash, Kdump creates a memory image (vmcore) that can help in determining the cause of the crash. Enabling Kdump requires reserving a portion of system memory for its exclusive use; this memory is unavailable for other purposes.
Kdump uses kexec to boot into a second kernel whenever the system crashes. kexec is a fast-boot mechanism that allows a Linux kernel to boot from inside the context of an already-running kernel without passing through the bootloader stage. The system kernel's memory image is preserved across the reboot and is accessible to the dump-capture kernel.
We can use common commands, such as cp and scp, to copy the memory image to a dump file on the local disk, or across the network to a remote system.
Architectures Support:
Kdump and kexec are currently supported on the x86, x86_64, ppc64, ia64, and s390x architectures.
How to configure Kdump ?
Prerequisites
For dumping cores to a network target, access to a server over NFS or SSH is required.
Whether dumping locally or to a network target, a device or directory with enough free disk space is needed to hold the core. See the "Sizing Local Dump Targets" section below for more information on how much space will be needed.
To configure kdump on a system running a Xen kernel, a regular kernel of the same version as the running Xen kernel must be installed on the system. (If the system is 32-bit with more than 4 GB of RAM, kernel-PAE should be installed alongside kernel-xen instead of kernel.)
Configuring kdump on the Command Line:
To configure the amount of memory that is reserved for the kdump kernel on x86, AMD64, and Intel 64 architectures, open the /boot/grub/grub.conf file as root and add the crashkernel=<size>M@16M parameter to the list of kernel options.
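For example, an illustrative grub.conf kernel line reserving 128 MB (the kernel version and root device below are placeholders; size the reservation to your system's RAM):

```
title Red Hat Enterprise Linux
        root (hd0,0)
        kernel /vmlinuz-<version> ro root=/dev/VolGroup00/LogVol00 crashkernel=128M@16M
        initrd /initrd-<version>.img
```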
Different way to Configuring the Target Type
Edit the /etc/kdump.conf & make appropriate change as per the production environment.
To change the local directory in which the core dump is to be saved, remove the hash sign (“#”) from the beginning of the #path /var/crash line, and replace the value with a desired directory path. Optionally, if you wish to write the file to a different partition, follow the same procedure with the #ext3 /dev/sda4 line.
To write the dump directly to a device, remove the hash sign (“#”) from the beginning of the #raw /dev/sda5 line, and replace the value with a desired device name. For example:
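For instance, illustrative /etc/kdump.conf target settings (uncomment exactly one; the directory, partition, and device names are examples):

```
# Save the vmcore under a local directory (the default):
path /var/crash
# ...or to a directory on a separate ext3 partition:
#ext3 /dev/sda4
# ...or raw to a dedicated dump device:
#raw /dev/sda5
```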
To store the dump to a remote machine using the NFS protocol, remove the hash sign (“#”) from the beginning of the #net my.server.com:/export/tmp line, and replace the value with a valid hostname and directory path. For example:
net penguin.example.com:/export/cores
To store the dump to a remote machine using the SSH protocol, remove the hash sign (“#”) from the beginning of the #net user@my.server.com line, and replace the value with a valid username and hostname. For example:
net john@penguin.example.com
Enabling the Service
To start the kdump daemon at boot time, type the following at a shell prompt as root:
chkconfig kdump on
This will enable the service for runlevels 2, 3, 4, and 5. Similarly, typing chkconfig kdump off will disable it for all runlevels. To start the service in the current session, use the following command as root:
service kdump start
Testing the Configuration
To test the configuration, reboot the system with kdump enabled, and as root, make sure that the service is running:
Then type the following commands at a shell prompt as root:
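Warning: the commands below deliberately crash the running kernel, so run them only on a test system. The first enables the SysRq facility; the second triggers the crash:

```
~]# echo 1 > /proc/sys/kernel/sysrq
~]# echo c > /proc/sysrq-trigger
```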
This will force the Linux kernel to crash, and the YYYY-MM-DD-HH:MM/vmcore file will be copied to the location you have selected in the configuration (that is, to /var/crash/ by default).
Analyzing the Core Dump
To analyze the vmcore dump file, you must have the crash and kernel-debuginfo packages installed. To do so, type the following at a shell prompt as root:
~]# yum install --enablerepo=rhel-debuginfo crash kernel-debuginfo
To start the utility, type the command in the following form at a shell prompt:
crash /var/crash/timestamp/vmcore /usr/lib/debug/lib/modules/kernel/vmlinux
Example: Running the crash utility
Example: Displaying the kernel stack trace
To display a status of processes in the system, type the ps command at the interactive prompt. You can use ps pid to display the status of the selected process.
Example : Displaying status of processes in the system
Example: Displaying information about open files of the current context
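A typical interactive session might look like the following sketch (the timestamp and kernel version in the paths are illustrative; output is omitted, and addresses and PIDs vary per system):

```
~]# crash /var/crash/2014-09-05-10:30/vmcore \
          /usr/lib/debug/lib/modules/2.6.18-274.el5/vmlinux
crash> bt          # kernel stack trace of the crashed context
crash> ps          # status of every process at crash time
crash> files       # open files of the current context
crash> exit
```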
How do I upload a large dump file to Red Hat Support Services?
In some cases, it might be necessary to send a kernel crash dump file to Red Hat Global Support Services for analysis. However, the dump file can be very large, even after being filtered. Since files larger than 250 MB cannot be uploaded directly through the Red Hat Customer Portal when opening a new support case, an FTP server is provided by Red Hat for uploading large files.
The FTP server's address is dropbox.redhat.com and files are to be uploaded to the /incoming/ directory. Your FTP client needs to be set to passive mode; if your firewall does not allow this mode, you may use the origin-dropbox.redhat.com server in active mode.
Make sure that the uploaded files are compressed using a program such as gzip and properly and descriptively named. Using your support case number in the file name is recommended. After successfully uploading all necessary files, provide the engineer in charge of your support case with the exact file name and its SHA1 or MD5 checksum.
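The compress-and-checksum step can be sketched as follows; the case number and file name are hypothetical, and the workflow is demonstrated on a stand-in file rather than a real vmcore:

```shell
# Stand-in for the real dump file copied out of /var/crash/<timestamp>/
echo "stand-in vmcore contents" > 01234567-vmcore

# Compress the file before uploading it to dropbox.redhat.com
gzip 01234567-vmcore

# Record the checksum to pass along to the support engineer
sha1sum 01234567-vmcore.gz
```

Including the support case number in the file name, as above, lets the engineer match the upload to your case.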