Capturing network traffic with pcap

In the field of computer network administration, pcap (packet capture) consists of an application programming interface (API) for capturing network traffic. Unix-like systems implement pcap in the libpcap library; Windows uses a port of libpcap known as WinPcap.

      Wikipedia: https://en.wikipedia.org/wiki/Pcap

Functions:

  1. pcap_lookupdev
  2. pcap_lookupnet
  3. pcap_open_live
  4. pcap_compile
  5. pcap_setfilter
  6. pcap_loop
  7. pcap_close

https://github.com/shinobu-x/x/tree/master/pcap

  • pcap_basic.cpp
  • pcap_get_addr.cpp
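
The functions above are typically used in the order listed. The following is a minimal, hedged sketch of that flow (not one of the repository files above); it hard-codes a "tcp" filter and captures ten packets:

 /* Minimal libpcap capture sketch (illustrative only). */
 #include <pcap.h>
 #include <stdio.h>

 static void handler(u_char *user, const struct pcap_pkthdr *h, const u_char *bytes)
 {
     (void)user; (void)bytes;
     printf("captured %u bytes\n", h->caplen);
 }

 int main(void)
 {
     char errbuf[PCAP_ERRBUF_SIZE];
     bpf_u_int32 net = 0, mask = 0;
     struct bpf_program fp;

     char *dev = pcap_lookupdev(errbuf);                      /* 1. pick a device */
     if (dev == NULL) { fprintf(stderr, "%s\n", errbuf); return 1; }

     pcap_lookupnet(dev, &net, &mask, errbuf);                /* 2. network and netmask */

     pcap_t *p = pcap_open_live(dev, 65535, 1, 1000, errbuf); /* 3. open the device */
     if (p == NULL) { fprintf(stderr, "%s\n", errbuf); return 1; }

     if (pcap_compile(p, &fp, "tcp", 1, mask) == 0)           /* 4. compile a BPF filter */
         pcap_setfilter(p, &fp);                              /* 5. apply it */

     pcap_loop(p, 10, handler, NULL);                         /* 6. capture 10 packets */
     pcap_close(p);                                           /* 7. clean up */
     return 0;
 }

Build with something like cc sniff.c -lpcap. Live capture normally requires root privileges, and pcap_lookupdev is deprecated in newer libpcap in favour of pcap_findalldevs.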

OpenStack Manual Installation - 4th Topic: To Build Glance Component

Notice: This series assumes kernel 3.10.0-229.

1. What Glance provides.

It provides a catalog service for storing and querying virtual disk images. Glance also provides an end-to-end solution for cloud disk image management with Nova and Swift.

2. Architecture.

Glance is composed of three pieces:

  1. glance-api
  2. glance-registry
  3. The Glance database

1. glance-api

 It accepts incoming API requests and then communicates with the other components to facilitate querying, retrieving, uploading, or deleting images. By default, glance-api listens on port 9292.

 The configuration for the Image Service API is found in the glance-api.conf file.

■ The Glance API calls (a request sketch follows this list):

  • Store Image: POST /images: Stores the image and then returns the metadata created about it.
  • Download Image: GET /images/<id>: Retrieves the image specified by <id>.
  • Update Image: PUT /images/<id>: Updates image metadata or actual data specified by <id>.
  • Delete Image: DELETE /images/<id>: Deletes the image specified by <id>.
  • List Images: GET /images: Returns id, name, disk_format, container_format, checksum, and size for all images.
  • Image Details: HEAD /images/<id>: Returns all metadata for the image specified by <id>.
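
As a rough illustration of the List Images call, a client issues the HTTP request with a Keystone token in the X-Auth-Token header (once the keystone flavor is configured as in the steps below). The libcurl sketch below assumes the v1 API on port 9292 and an already-issued token; <ip address> and <token> are placeholders:

 /* list_images.c: illustrative GET /images call with libcurl.
  * Assumes the Glance v1 endpoint on port 9292 and a token already
  * issued by Keystone; not part of the Glance codebase. */
 #include <curl/curl.h>
 #include <stdio.h>

 int main(void)
 {
     curl_global_init(CURL_GLOBAL_DEFAULT);
     CURL *curl = curl_easy_init();
     if (!curl) return 1;

     /* <ip address> and <token> are placeholders, as in the commands below */
     struct curl_slist *hdrs = curl_slist_append(NULL, "X-Auth-Token: <token>");
     curl_easy_setopt(curl, CURLOPT_URL, "http://<ip address>:9292/v1/images");
     curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);

     CURLcode rc = curl_easy_perform(curl);   /* JSON response body goes to stdout */
     if (rc != CURLE_OK)
         fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

     curl_slist_free_all(hdrs);
     curl_easy_cleanup(curl);
     curl_global_cleanup();
     return 0;
 }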

2. glance-registry

 This process stores and retrieves metadata about images.

 The configuration for the Image Service registry, which stores the metadata about images, is found in the glance-registry.conf file.

3. The Glance database

 It contains only two tables: Image and Image property. The Image table represents the image in the datastore (disk format, container format, size, etc.), while the Image property table contains custom image metadata. While the image representation and image metadata are stored in the database, the actual images are stored in image stores.

 Image stores are the storage places for the virtual disk image and come in a number of different options.

3. Installation

3.1. To install package

 # yum -y install openstack-glance

3.2. To set up default configuration

 # cp /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.org

 # cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.org

 # cp /usr/share/glance/glance-registry-dist.conf /etc/glance/glance-registry.conf

 # openstack-db --init --service glance --password glance --rootpw glance

 # crudini --set /etc/glance/glance-api.conf paste_deploy flavor keystone

 # crudini --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name services

 # crudini --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance

 # crudini --set /etc/glance/glance-api.conf keystone_authtoken admin_password glance

 # crudini --set /etc/glance/glance-api.conf DEFAULT rabbit_userid rabbitadmin

 # crudini --set /etc/glance/glance-api.conf DEFAULT rabbit_password rabbitpass

 # crudini --set /etc/glance/glance-api.conf DEFAULT rabbit_host <ip address>

 # crudini --set /etc/glance/glance-api.conf DEFAULT rabbit_use_ssl True

 # crudini --set /etc/glance/glance-api.conf DEFAULT rabbit_port 5671

 # crudini --set /etc/glance/glance-registry.conf paste_deploy flavor keystone

 # crudini --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name services

 # crudini --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance

 # crudini --set /etc/glance/glance-registry.conf keystone_authtoken admin_password glance

3.3. To start, enable glance-registry, glance-api

 # systemctl start openstack-glance-registry

 # systemctl enable openstack-glance-registry

 # systemctl start openstack-glance-api

 # systemctl enable openstack-glance-api

 # egrep -i 'err|critical' /var/log/glance/*

3.4. To register Glance in Keystone

 # source /root/keystonerc_admin

 # keystone user-create --name glance --pass glance

 # keystone user-role-add --user glance --role admin --tenant services

 # keystone service-create --name glance --type image --description "OpenStack Image Service"

 # export GLANCE_SERVICE_ID=`keystone service-list | grep glance | awk '{print $2}'`

 # keystone endpoint-create --service-id ${GLANCE_SERVICE_ID} \

 --publicurl http://<ip address>:9292 \

 --adminurl http://<ip address>:9292 \

 --internalurl http://<ip address>:9292

 # firewall-cmd --add-port=9292/tcp --permanent

 # firewall-cmd --reload

3.5. To create image for testing

 # mkdir /images

 # qemu-img create -f qcow2 /images/centos7.1.img 10G

 # virt-install \

 --name Centos7.1 \

 --ram 2048 \

 --disk path=/images/centos7.1.img \

 --vcpus=2 \

 --os-type linux \

 --os-variant=rhel7 \

 --graphics none \

 --console pty,target_type=serial \

 --location='http://ftp.iij.ad.jp/pub/linux/centos/7.1.1503/os/x86_64/' \

 --extra-args='console=ttyS0,115200n8 serial'

 # glance image-create \

 --name test \

 --is-public true \

 --disk-format qcow2 \

 --container-format bare \

  < /images/centos7.1.img

 # glance image-list

Aside: Reversibility - Ruby

This was quite interesting to me.

irb(main):001:0> me1 = "puts 'Life with Linux'"
=> "puts 'Life with Linux'"

 

irb(main):002:0> puts me1
puts 'Life with Linux'
=> nil

 

irb(main):003:0> eval me1
Life with Linux
=> nil

 

irb(main):004:0> bytes_in_binary1 = me1.bytes.map { |byte| byte.to_s(2).rjust(8, '0') }
=> ["01001100", "01101001", "01100110", "01100101", "00100000", "01110111", "01101001", "01110100", "01101000", "00100000", "01001100", "01101001", "01101110", "01110101", "01111000"]

 

irb(main):005:0> num = bytes_in_binary1.join.to_i(2)
=> 396752326826842305216365844501525880

 

irb(main):006:0> puts num
396752326826842305216365844501525880
=> nil

 

irb(main):007:0> bytes_in_binary2 = num.to_s(2).scan(/.+?(?=(?:.{8})*\z)/)
=> ["1001100", "01101001", "01100110", "01100101", "00100000", "01110111", "01101001", "01110100", "01101000", "00100000", "01001100", "01101001", "01101110", "01110101", "01111000"]

 

irb(main):008:0> me2 = bytes_in_binary2.map { |string| string.to_i(2).chr }.join
=> "Life with Linux"

 

irb(main):009:0> puts me2
puts 'Life with Linux'
=> nil

 

irb(main):010:0> eval me2
Life with Linux
=> nil


OpenStack Manual Installation - 3rd Topic: To Build Swift Component

Notice: This series assumes kernel 3.10.0-229.

1. What Swift provides

 A massively scalable and redundant object store conceptually similar to Amazon S3.

  S3: Simple Storage Service

 To provide this scalability and redundancy, it writes multiple copies of each object to multiple storage servers within separate ZONEs.

2. What zones are

 They are a logical grouping of storage servers that have been isolated from each other to guard against failure.

 Swift is configurable in terms of how many copies, called replicas, are written, as well as how many zones are configured. Best practice calls for three replicas written across five zones.

3. Architecture of Swift

 The logical view of swift can be divided into two parts:

  1. Presentation

  2. Resource

Presentation

 Swift accepts end-user requests via the swift-proxy process, which optionally authenticates and authorizes them, then passes them on to the appropriate object, account, or container processes for completion. It can optionally work with a cache such as memcached to reduce authentication, container, and account calls. It also accepts requests via the OpenStack API on port 80, and there is optional middleware to support the Amazon S3 API.

Authentication

Swift handles authentication through a three-step process (a request sketch follows the list):

  1. The user authenticates through the authentication system (or middleware within swift-proxy) and receives a unique token, which is an operator-customizable string. This step is only required if the user does not possess a valid token. Tokens are valid for an operator-configurable time limit.
  2. The user issues a second request to Swift, directly to swift-proxy, passing the token along with the request in the HTTP headers.
  3. swift-proxy validates the token and responds to the user request with the help of swift-account, swift-container, and/or swift-object.
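
As a rough sketch of steps 2 and 3, the second request is a plain HTTP call against the account's storage URL (the AUTH_%(tenant_id)s endpoint registered later in this post) with the token in the X-Auth-Token header. Everything in angle brackets is a placeholder, and step 1 (obtaining the token) is assumed to have happened already:

 /* swift_list.c: illustrative step-2 request (placeholders throughout).
  * Lists containers in the account by GETting the storage URL with the
  * token obtained in step 1; swift-proxy validates it (step 3). */
 #include <curl/curl.h>
 #include <stdio.h>

 int main(void)
 {
     curl_global_init(CURL_GLOBAL_DEFAULT);
     CURL *curl = curl_easy_init();
     if (!curl) return 1;

     struct curl_slist *hdrs = curl_slist_append(NULL, "X-Auth-Token: <token>");
     curl_easy_setopt(curl, CURLOPT_URL,
                      "http://<bind ip>:8080/v1/AUTH_<tenant_id>");
     curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);

     if (curl_easy_perform(curl) != CURLE_OK)   /* container listing goes to stdout */
         fprintf(stderr, "request failed\n");

     curl_slist_free_all(hdrs);
     curl_easy_cleanup(curl);
     curl_global_cleanup();
     return 0;
 }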

Resource

 Swift manages a number of information resources through three processes that fulfill requests from swift-proxy.

  1. swift-account: Manages a sqlite3 database of accounts defined within the object storage service.
  2. swift-container: Manages another sqlite3 database that contains a mapping of containers, analogous to buckets in S3.
  3. swift-object: Manages a mapping of the actual objects stored on the storage nodes.

4. Installation

4.1. To install packages

 # yum -y install \

openstack-swift-proxy \

openstack-swift-object \

openstack-swift-container \

openstack-swift-account \

python-swiftclient \

memcached

4.2. To configure keystone
 # source /root/keystonerc_admin && \

keystone user-create --name swift --pass swift && \

keystone tenant-create --name swift && \

keystone role-create --name swift
 # keystone user-role-add --role swift --tenant swift --user swift

 # keystone service-create \

--name swift \

--type object-store \

--description "Swift Storage Service"
 # export SWIFT_SERVICE_ID=`keystone service-list | grep swift | awk '{print $2}'` && echo ${SWIFT_SERVICE_ID}
 # echo "keystone endpoint-create \

--service-id ${SWIFT_SERVICE_ID} \

--publicurl \"http://<bind ip>:8080/v1/AUTH_%(tenant_id)s\" \

--adminurl \"http://<bind ip>:8080/v1/AUTH_%(tenant_id)s\" \

--internalurl \"http://<bind ip>:8080/v1/AUTH_%(tenant_id)s\""

4.3. To prepare block devices to export

 # fdisk -l && mkdir -p /srv/node/z{1,2}d1 && \

echo "/dev/sdc1 /srv/node/z1d1 xfs defaults 0 0" >> /etc/fstab && \

echo "/dev/sdd1 /srv/node/z2d1 xfs defaults 0 0" >> /etc/fstab && \

mount -a && \

chown -R swift:swift /srv/node/ && \

restorecon -R /srv/node
4.4. To configure swift
 # cp /etc/swift/swift.conf /etc/swift/swift.conf.org && \

cp /etc/swift/account-server.conf /etc/swift/account-server.conf.org && \

cp /etc/swift/container-server.conf /etc/swift/container-server.conf.org && \

cp /etc/swift/object-server.conf /etc/swift/object-server.conf.org && \

crudini --set /etc/swift/swift.conf swift-hash swift_hash_path_prefix $(openssl rand -hex 10) && \

crudini --set /etc/swift/swift.conf swift-hash swift_hash_path_suffix $(openssl rand -hex 10) && \

crudini --set /etc/swift/account-server.conf DEFAULT bind_ip <ip address to bind> && \

crudini --set /etc/swift/container-server.conf DEFAULT bind_ip <ip address to bind> && \

crudini --set /etc/swift/object-server.conf DEFAULT bind_ip <ip address to bind>

 # source /root/keystonerc_admin
 # swift-ring-builder /etc/swift/account.builder create 12 2 1 && \

swift-ring-builder /etc/swift/container.builder create 12 2 1 && \

swift-ring-builder /etc/swift/object.builder create 12 2 1

 The Rings:

 The rings contain information about all the Swift storage partitions and how they are distributed between the different nodes and disks.

 Syntax:

 swift-ring-builder <builder file> create <part power> <replicas> <min part hours>

  <part power>: The partition count expressed as a power of two (the ring will have 2^<part power> partitions).

  <replicas>: Number of replicas of each object.

  <min part hours>: Minimum number of hours before a given partition can be moved again.

 With create 12 2 1 as used above, that means 2^12 = 4096 partitions, 2 replicas, and a one-hour minimum between partition moves.
 # for i in 1 2; \

do swift-ring-builder \

/etc/swift/account.builder add z${i}-<bind ip>:6002/z${i}d1 100 ; \

done

 # for i in 1 2; \

do swift-ring-builder \

/etc/swift/container.builder add z${i}-<bind ip>:6001/z${i}d1 100 ; \

done && \

for i in 1 2; \

do swift-ring-builder \

/etc/swift/object.builder add z${i}-<bind ip>:6000/z${i}d1 100 ; \

done
 # swift-ring-builder /etc/swift/account.builder rebalance && \

swift-ring-builder /etc/swift/container.builder rebalance && \

swift-ring-builder /etc/swift/object.builder rebalance && ls /etc/swift/*gz

4.5. To enable the service

 # systemctl start openstack-swift-account && \

systemctl start openstack-swift-container && \

systemctl start openstack-swift-object && \

systemctl enable openstack-swift-account && \

systemctl enable openstack-swift-container && \

systemctl enable openstack-swift-object
 # chown -R root:swift /etc/swift && \

tail /var/log/messages && \

cp /etc/swift/proxy-server.conf /etc/swift/proxy-server.conf.org
4.6. To initialize swift-proxy
 # crudini --set /etc/swift/proxy-server.conf \

filter:authtoken admin_tenant_name services && \

crudini --set /etc/swift/proxy-server.conf \

filter:authtoken \

identity_uri http://<bind ip>:35357 && \

crudini --set /etc/swift/proxy-server.conf \

filter:authtoken admin_user swift && \

crudini --set /etc/swift/proxy-server.conf \

filter:authtoken admin_password swift
4.7. To start services
 # systemctl start memcached && \

systemctl start openstack-swift-proxy && \

tail /var/log/messages && \

systemctl enable memcached && \

systemctl enable openstack-swift-proxy

Sending Flow of Packets in the Linux Kernel

Notice: The layer numbers in this document are based on the OSI model.

Layer5

Three system calls:

  • write
  • sendto
  • sendmsg

end up in:

  • __sock_sendmsg()

 which does:

  • security_sock_sendmsg()

 to check permissions, then forwards the message to the next layer using the socket's sendmsg virtual method.
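
From user space this path is entered through any of the three system calls above. A minimal, self-contained illustration using sendto() on a UDP socket (the destination address and port are arbitrary values chosen only for the example):

 /* udp_send.c: user-space view of the entry point described above.
  * The sendto() call below is one of the three system calls listed and
  * is where execution crosses into the kernel path described here. */
 #include <arpa/inet.h>
 #include <netinet/in.h>
 #include <string.h>
 #include <sys/socket.h>
 #include <unistd.h>

 int main(void)
 {
     int fd = socket(AF_INET, SOCK_DGRAM, 0);
     if (fd < 0) return 1;

     struct sockaddr_in dst;
     memset(&dst, 0, sizeof dst);
     dst.sin_family = AF_INET;
     dst.sin_port = htons(9999);                     /* arbitrary destination port */
     inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr); /* TEST-NET example address */

     const char msg[] = "hello";
     sendto(fd, msg, sizeof msg - 1, 0,
            (struct sockaddr *)&dst, sizeof dst);    /* enters the kernel here */

     close(fd);
     return 0;
 }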

Layer4

Layer 4 finds an sk_buff with space available and copies the data from user space into kernel space using:

  • skb_add_data()

The buffer space is pre-allocated for each socket. If there is no available space for the buffer, communication stalls. The data remains in user space until buffer space becomes available again.

The size of allocated sk_buff is equal to the MSS + headroom.

  • MSS: Maximum Segment Size

Segmentation happens at this layer. Whatever ends up in the same sk_buff will become a single TCP segment. Still, the segments can be fragmented further at the next layer.

The TCP queue is activated. Packets are sent with:

  • tcp_transmit_skb()

It will be called multiple times if there are more active buffers.

tcp_transmit_skb()

builds the TCP header which the allocation of the sk_buff has left space for. It clones the skb to pass control to the network layer. The network layer is called through the queue_xmit virtual function of the socket's address family:

inet_connection_sock->icsk_af_ops

Layer3

  • ip_queue_xmit()

does routing, if necessary, then creates the IPv4 header.

  • nf_hook()

is called in several places to perform network filtering. This hook may modify the datagram or discard it.

The routing decision results in a destination dst_entry object. This destination models the receiving IP address of the datagram. The dst_entry's output virtual method is called to perform the actual output.

The sk_buff is passed on to:

  • ip_output()

It does post-routing filtering, re-outputs it on a new destination if necessary due to netfilter, fragments the datagram into packets if necessary, and finally sends it to the output device.

Fragmentation tries to reuse existing fragment buffers if possible. This happens when forwarding an already fragmented incoming IP packet. The fragment buffers are special sk_buff objects, pointing in the same data space; no copy required.

If no fragment buffers are available, new sk_buff objects with new data space are allocated, and the data is copied.

It's worth noting that TCP already makes sure that the packets are smaller than MTU, so normally fragmentation is not required.

  • MTU: Maximum Transmission Unit

Device-specific output is again through a virtual method call, to output of the dst_entry's neighbour data structure. This is usually:

  • dev_queue_xmit

There is some optimization for packets with a known destination:

hh_cache

Layer2

The main function of the kernel at this layer is scheduling the packets to be sent out. For this purpose, Linux uses the queueing discipline (struct qdisc) abstraction.

  • dev_queue_xmit

puts the sk_buff on the device queue using the qdisc->enqueue virtual method.

If necessary, the data is linearized into the sk_buff. This requires copying.

Devices which don't have:

  • qdisc

go directly to:

  • dev_hard_start_xmit()

Several qdisc scheduling policies exist. The basic and most used one is:

  • pfifo_fast

which has three priorities.

The device output queue is immediately triggered with:

  • qdisc_run()

It calls:

  • qdisc_restart()

which takes an skb from the queue using the qdisc->dequeue virtual method. Specific queueing disciplines may delay sending by not returning any skb, and setting up instead:

  • qdisc_watchdog_timer()

When the timer expires:

  • netif_schedule()

is called to start transmission.

Eventually, the sk_buff is sent with:

  • dev_hard_start_xmit()

and removed from qdisc. If sending fails, the skb is re-queued.

  • netif_schedule()

is called to re-schedule and it raises a software interrupt, which causes:

  • net_tx_action()

 to be called when:

  • NET_TX_SOFTIRQ

is run by ksoftirqd.

  • net_tx_action()

calls:

  • qdisc_run()

for each device with an active queue.
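
The dequeue / transmit / requeue behaviour described above can be pictured with a small toy model. This is not kernel source; the names merely mirror the kernel functions so the flow is easier to follow:

 /* qdisc_toy.c: a toy model (NOT kernel code) of the dequeue/transmit/
  * requeue loop described above. */
 #include <stdio.h>

 #define QLEN 4

 static int queue[QLEN] = { 1, 2, 3, 4 };  /* stand-ins for queued sk_buffs */
 static int head = 0, tail = QLEN;

 static int dequeue(void)     { return head < tail ? queue[head++] : -1; }
 static void requeue(int skb) { queue[--head] = skb; }

 /* pretend the device rejects the first attempt to send skb 3 */
 static int dev_hard_start_xmit(int skb)
 {
     static int failed_once = 0;
     if (skb == 3 && !failed_once) { failed_once = 1; return -1; }
     printf("transmitted skb %d\n", skb);
     return 0;
 }

 static void qdisc_restart_toy(void)
 {
     int skb;
     while ((skb = dequeue()) != -1) {
         if (dev_hard_start_xmit(skb) != 0) {
             requeue(skb);   /* sending failed: put it back on the queue */
             break;          /* the real kernel reschedules via netif_schedule() */
         }
     }
 }

 int main(void)
 {
     qdisc_restart_toy();    /* first pass stops at the failing skb */
     qdisc_restart_toy();    /* "rescheduled" pass drains the rest */
     return 0;
 }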

  • dev_hard_start_xmit()

calls the hard_start_xmit virtual method for the net_device. But first, it calls:

  • dev_queue_xmit_nit()

which checks if a packet handler has been registered for:

  • ETH_P_ALL

protocol. This is used for tcpdump.

The device driver's hard_start_xmit function will generate one or more commands to the network device for scheduling transfer of the buffer. After a while, the network device replies that it's done. This triggers freeing of the sk_buff. If the sk_buff is freed from interrupt context:

  • dev_kfree_skb_irq()

is used. This delays the actual freeing until the next:

  • NET_TX_SOFTIRQ

runs, by putting the skb on the softnet_data completion_queue. This avoids doing frees from interrupt context.

 

PDF is available at:

https://goo.gl/NKlbEk

OpenStack Manual Installation - 2nd Topic: To Build Keystone Component

Notice: This series assumes kernel 3.10.0-229.

Keystone: what it does and how it works.

1. Introduction

 Keystone provides identity and access policy services for all components in the OpenStack family. It implements its own REST-based API, called the Identity API.

 It provides authentication and authorization for all components of OpenStack, including (but not limited to) Swift, Glance, and Nova.

 Authentication verifies that a request actually comes from who it says it does.

 Authorization verifies whether the authenticated user has access to the services he or she is requesting.

2. Authentication

 Keystone provides two ways of authentication.

 One is username/password based and the other is token based. Apart from that, Keystone provides the following services (a request sketch follows the list):

 1. Token Service

  To carry authorization information about an authenticated user.

 2. Catalog Service

   To contain a list of the available services at the users' disposal.

 3. Policy Service

   To let Keystone manage access to specific services by specific users or groups.
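
For example, the username/password flow mentioned above boils down to a single REST call against the Identity v2.0 API. The sketch below uses libcurl with a placeholder host and the sample credentials used later in this post; it is only an illustration, not a keystoneclient example. The returned JSON contains the token under access.token.id:

 /* get_token.c: illustrative Identity API v2.0 token request with libcurl.
  * Host, tenant, user, and password are placeholders / sample values. */
 #include <curl/curl.h>
 #include <stdio.h>

 int main(void)
 {
     const char *body =
         "{\"auth\": {\"tenantName\": \"keystone\", "
         "\"passwordCredentials\": "
         "{\"username\": \"keystone\", \"password\": \"keystone\"}}}";

     curl_global_init(CURL_GLOBAL_DEFAULT);
     CURL *curl = curl_easy_init();
     if (!curl) return 1;

     struct curl_slist *hdrs = curl_slist_append(NULL, "Content-Type: application/json");
     curl_easy_setopt(curl, CURLOPT_URL, "http://<hostname>:5000/v2.0/tokens");
     curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
     curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);

     /* the JSON response, containing access.token.id, is written to stdout */
     if (curl_easy_perform(curl) != CURLE_OK)
         fprintf(stderr, "token request failed\n");

     curl_slist_free_all(hdrs);
     curl_easy_cleanup(curl);
     curl_global_cleanup();
     return 0;
 }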

3. Components of Identity Service

 Endpoints:

  Every OpenStack service (Nova, Swift, Glance) runs on a dedicated port and at a dedicated URL, called an endpoint.

 One of the functions of Keystone is to provide users with a service catalog, which allows them to look up the endpoint URLs of the other services. Using the method:

keystoneclient.v2_0.client.Client.service_catalog.url_for

users can retrieve the endpoint of a service.

Options to use API:

 OS_USERNAME: Your Keystone username.

 OS_PASSWORD: Your Keystone password.

 OS_TENANT_NAME: Name of Keystone tenant.

 OS_AUTH_URL: The OpenStack API server URL.

 OS_IDENTITY_API_VERSION: The OpenStack identity API version.

 OS_CACERT: The location for the CA truststore (PEM formatted) for the client.

 OS_CERT: The location for the keystore (PEM formatted) containing the public key of the client. This keystore can also optionally contain the private key of this client.

 OS_KEY: The location for the keystore (PEM formatted) containing the private key of the client. This value can be empty if the private key is included in the OS_CERT file.

 Regions:

  A region defines a dedicated physical location inside a data centre. In a typical cloud setup, most if not all services are distributed across data centers/servers called Regions.

 User:

  Keystone authenticates users.

 Services:

  Each component that is connected to or administered via Keystone is called a service.

 Role:

  A role defines what a particular user is allowed to do inside the cloud infrastructure, so it is important to have a role associated with each user.

 Tenant:

  A tenant is a project with all the service endpoints, and a role is associated with each user who is a member of that particular tenant.

4. Installation

 1. To set repository

  # export HOSTNAME=`hostname`

  # rpm -ivh \

https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm

  # yum clean all

  # yum makecache

  # yum whatprovides openstack-keystone

openstack-keystone-2014.1-0.3.b2.el7.noarch : OpenStack Identity Service
Repo : openstack-icehouse
openstack-keystone-2014.1-0.4.b3.el7.noarch : OpenStack Identity Service
Repo : openstack-icehouse
openstack-keystone-2014.1-0.7.rc1.el7.noarch : OpenStack Identity Service
Repo : openstack-icehouse
openstack-keystone-2014.1-0.8.rc2.el7.noarch : OpenStack Identity Service
Repo : openstack-icehouse
openstack-keystone-2014.1-0.9.rc2.el7.noarch : OpenStack Identity Service
Repo : openstack-icehouse
openstack-keystone-2014.1-2.el7.noarch : OpenStack Identity Service
Repo : openstack-icehouse
openstack-keystone-2014.1.1-1.el7.noarch : OpenStack Identity Service
Repo : openstack-icehouse
openstack-keystone-2014.1.2.1-1.el7.centos.noarch : OpenStack Identity Service
Repo : openstack-icehouse
openstack-keystone-2014.1.4-1.el7.centos.noarch : OpenStack Identity Service
Repo : openstack-icehouse

 2. To install Keystone

  # yum -y install openstack-keystone openstack-selinux openstack-utils

  # openstack-db --init --service keystone

  # keystone-manage pki_setup --keystone-user keystone --keystone-group keystone

  # export SERVICE_TOKEN=$(openssl rand -hex 10)

  # export SERVICE_ENDPOINT=http://${HOSTNAME}:35357/v2.0

  # echo ${SERVICE_TOKEN} > /root/ks_admin_token

Admin token

 Before you can use the REST API, you need to define an authorization token which is configured in keystone.conf under the section [DEFAULT].

 A "shared secret" that can be used to bootstrap Keystone. This token does not represent a user, and carries no explicit authorization.

  # crudini --set /etc/keystone/keystone.conf DEFAULT admin_token ${SERVICE_TOKEN}

 3. To start, enable Keystone

  # systemctl start openstack-keystone

  # systemctl enable openstack-keystone

  # systemctl enable mariadb.service

  # firewall-cmd --add-port=35357/tcp --permanent

  # firewall-cmd --add-port=5000/tcp --permanent

  # firewall-cmd --reload

  # ps -ajxf | grep keystone-all

  # egrep -i 'err|fail' /var/log/keystone/keystone.log

  # keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"

  # export KEYSTONE_SERVICE_ID=`keystone service-list | grep keystone | awk '{print $2}'` && echo ${KEYSTONE_SERVICE_ID}

  # keystone user-create --name keystone --pass keystone

  # keystone role-create --name keystone

  # keystone tenant-create --name keystone

  # keystone user-role-add --user keystone --role keystone --tenant keystone

  # cat >> /root/keystonerc << EOF
  export OS_USERNAME=keystone
  export OS_TENANT_NAME=keystone
  export OS_PASSWORD=keystone
  export OS_AUTH_URL=http://${HOSTNAME}:35357/v2.0/
  export PS1='[\u@\h \W(keystone)]\$ '
  EOF

  # source ~/keystonerc

  # keystone user-list

 

I will consider security for the OpenStack infrastructure, including the networking part, in more detail later.