Wednesday, August 20, 2014

Openstack Icehouse Installation using Devstack

System Requirements

Ubuntu 14.04 LTS
Disk : 20GB
Memory : 4GB
Network : 2 NICs


Steps

- Install Linux
- Basic Linux upgrade
- Install Git
- Git clone the DevStack repository
- cd devstack
- Create localrc
- Run stack.sh


Detailed Steps

1. Linux update

sudo apt-get update

sudo rm -rf /var/lib/apt/lists/*

sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade

sudo reboot


2. Install devstack

sudo apt-get install git
git clone -b stable/icehouse https://github.com/openstack-dev/devstack.git





3. Configure DNS and download a sample localrc
nano /etc/resolv.conf
    nameserver 8.8.8.8
    nameserver 8.8.4.4

sudo resolvconf -u

cd ~/devstack

wget -O localrc http://goo.gl/OeOGqL 



4. Add the following line to localrc (very important)

GIT_BASE=${GIT_BASE:-https://git.openstack.org}


5. Run stack script

./stack.sh
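
Once stack.sh completes, a quick sanity check can be done from the same shell. This is only a rough sketch; the credentials assume the sample localrc shown below (ADMIN_PASSWORD=openstack), and the host IP is whatever your machine uses.

cd ~/devstack
source openrc admin admin     # load the admin credentials created by DevStack
nova service-list             # compute services should report state "up"
neutron net-list              # Neutron should answer since n-net is disabled

Horizon should also be reachable in a browser at http://&lt;host IP&gt;/ using admin and the ADMIN_PASSWORD from localrc.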

=============

stack@ubuntu14:~/devstack$ more localrc
DEST=/opt/stack
GIT_BASE=${GIT_BASE:-https://git.openstack.org}

# Logging
LOGFILE=$DEST/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=False
SCREEN_LOGDIR=$DEST/logs/screen

# Credentials
ADMIN_PASSWORD=openstack
MYSQL_PASSWORD=openstack
RABBIT_PASSWORD=openstack
SERVICE_PASSWORD=openstack
SERVICE_TOKEN=tokentoken

# Github's Branch
GLANCE_BRANCH=stable/icehouse
HORIZON_BRANCH=stable/icehouse
KEYSTONE_BRANCH=stable/icehouse
NOVA_BRANCH=stable/icehouse
NEUTRON_BRANCH=stable/icehouse
HEAT_BRANCH=stable/icehouse
CEILOMETER_BRANCH=stable/icehouse

# Neutron - Networking Service
DISABLED_SERVICES=n-net
ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-metering,neutron

# Neutron - Load Balancing
ENABLED_SERVICES+=,q-lbaas

# Heat - Orchestration Service
ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-api-cw,h-eng
HEAT_STANDALONE=True

# Ceilometer - Metering Service (metering + alarming)
ENABLED_SERVICES+=,ceilometer-acompute,ceilometer-acentral,ceilometer-collector,ceilometer-api
ENABLED_SERVICES+=,ceilometer-alarm-notify,ceilometer-alarm-eval

stack@ubuntu14:~/devstack$

Tuesday, August 19, 2014

OSPF Review and Terminology


ASBR (Autonomous System Boundary Router)
- Redistributes external routes from another routing domain into the OSPF domain.

ABR (Area Border Router)
- Connects area 0 (the backbone) to the other areas.

Stub Area 
- Blocks external (type 5) LSAs; the ABR injects a default route for external reachability instead
- Redistribution (an ASBR) is not allowed inside a stub area

NSSA
- Blocks type 5 and type 4 LSAs
- Allows type 3 LSAs
- Allows external routes to be injected in a limited fashion as type 7 LSAs, which exist only within the NSSA


What is Type 3
- The Type 3 (summary) LSA is originated by an ABR into one area to describe networks in another area.

What is Type 4
- The Type 4 LSA is originated by an ABR and describes the ASBR in one area to routers in other areas.

What is Type 5
- OSPF uses Type 5 (external) LSAs to advertise external routes originated by an ASBR.


What is Type 7
- The Type 7 LSA is generated by an ASBR inside an NSSA. Type 5 LSAs are not allowed in an NSSA, so the NSSA ASBR generates a Type 7 LSA instead. This Type 7 LSA is translated back into a Type 5 LSA by the NSSA ABR.
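
To tie the area types back to configuration, here is a minimal IOS-style sketch (the process ID and area numbers are made up for illustration); the stub or nssa keyword must be configured on every router in that area:

router ospf 1
 area 1 stub    ! area 1 rejects type 5 LSAs; the ABR injects a default route
 area 2 nssa    ! area 2 carries external routes as type 7; the NSSA ABR translates them to type 5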

Thursday, May 15, 2014

Nexus5K - DCBX Troubleshooting between Nexus5K to Netapp direct connection via FCoE

Problem Description


The VFC interface status shows that the interface is stuck at initializing for the VSAN. All other configuration appears to be correct: the VSAN is bound to the VLAN, and the VLAN is allowed on the link. The fabric interconnects appear to have FLOGI'd in appropriately.
"sh flogi database" shows FLOGIs from all ports connected to the FI, but no NetApp ports. Neither the ports from the FI nor the ports to the NetApp are port-channeled.


System Info 

N5K-C5596UP
5.1(3)N2(1)


Simplified Topology 

UCS --- Nexus5K --vfc33 (Eth2/8) -- trunk -- Netapp


Background of FCoE

Fibre Channel over Ethernet (FCoE) provides a method of transporting Fibre Channel traffic over a physical Ethernet connection. FCoE requires that the underlying Ethernet be full duplex and provides lossless behavior for Fibre Channel traffic. So things we need to verify are as follows :

fcoe_FIP-ladder.png

Troubleshooting Steps

The customer reports two symptoms. The first is that the VSAN is stuck in the initializing phase; the second is that there is no FLOGI entry for the NetApp CNA.


1.  VSAN is stuck in initialization phase

n5k_sh_int_vfc.png


2. No FLOGI learning for Netapp CNA adapter

- Below is the WWPN for the NetApp CNA adapter. The NetApp filer has FC port name 50:0a:09:83:8f:e7:7b:99
netapp_cna.png

- run "sh flogi database" from N5K and it shows flogi's in from all ports connected to the FI, but no Netapp ports are shown here :

n5k_sh_flogi.png

3. FCF Discovery or VLAN Discovery 
- Go back to "FIP - Login Flow ladder" diagram above. We see that there is no FLOGI learning for NetApp CNA but we can understand it is ether due to FCP discovery failure or VLAN discovery failure.

- Check the configuration again. The general steps to configure FCoE are: enable the FCoE feature, map a VSAN onto a VLAN, and then create a virtual Fibre Channel (vfc) interface. A minimal sketch of these commands is shown below.
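
Purely for reference, a minimal configuration sketch of that sequence looks roughly like this (the FCoE VLAN number 1402 is hypothetical; VSAN 402, vfc33 and Ethernet2/8 come from this case):

feature fcoe
vlan 1402
  fcoe vsan 402                      ! map VSAN 402 onto this VLAN
interface vfc33
  bind interface Ethernet2/8         ! bind the virtual FC interface to the physical port
  switchport trunk allowed vsan 402
  no shutdown
interface Ethernet2/8
  switchport mode trunk              ! the FCoE VLAN must be allowed on this trunk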

- Below is what we see from the Nexus5K.
n5k_config_vfc_ethernet.png

- The config looks good: we can see that vfc33 allows VSAN 402 and is bound to Ethernet2/8. However, the interface vfc33 reports that the trunk VSAN is not yet up.
n5k_sh_int_vfc.png


4. Trunk VSANs stuck at initializing phase

- This indicates there is some issue with the DCBX negotiation, but it is still not clear what actually went wrong.

- Verify VLAN FCOE
sh_vlan_fcoe.png

5. Check DCBX negotiation


- DCBX is the Data Center Bridging eXchange protocol. The FCoE switch and the CNA adapter exchange capability information and configuration values via DCBX. 
- DCBX runs on the physical Ethernet link between the Nexus5K and the CNA
- It uses LLDP as its transport layer to carry DCBX packets between DCBX-capable devices

- In the DCBX spec, the PFC feature is described on pages 29 to 30. It is a 16-bit structure: 8 bits for the priority (PFC enable) bitmap, followed by 8 bits for the number of traffic classes supported. For the N5K the expected value is 0808, i.e. priority bitmap 0x08 (PFC enabled on CoS 3, the class FCoE uses) and 8 traffic classes.


sh_lldp_dcbx.png

- In the output above, we do not see 0808 for Type 003 on the local device (i.e. the N5K). However, the NetApp CNA adapter shows the expected Type 003 value, so at least we know this is not a NetApp CNA issue. Something is not right on the Nexus5K.



6. Deep-dive further into the DCBX output to understand what is really failing

- Run "sh sys int dcbx info interface e2/8" and look for an error. This is fairly lengthy output, so read it carefully, especially around the "error" lines. You can pipe the output through "| grep error" to verify quickly whether there is any error, and then review the complete output.

- In this sample case, we do see an error (the words in red in the output below). This tells us that there is some issue with the PFC configuration.


sh_sys_int_dcbx_info.png

7. Check the QoS config for PFC
- As the error indicates, there is some problem with PFC. Generally speaking, the Nexus 5000 Series uses PFC to establish a lossless medium for the Fibre Channel payload in the FCoE implementation.

- Type "show ipqos" to verify PFC config part in Nexus5K.

sh_ipqos_new.png

- We can see that the system QoS in the yellow box does not include the FCoE PFC class.

- Add the following commands for FCoE (a generic sketch follows the screenshot) :
add_PFC_qos_new.png
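
For reference, on a Nexus 5000 this fix normally amounts to re-applying the default FCoE policies under system qos. The policy names below are the NX-OS defaults, shown as a generic sketch rather than a copy of this customer's configuration:

system qos
  service-policy type qos input fcoe-default-in-policy
  service-policy type queuing input fcoe-default-in-policy
  service-policy type queuing output fcoe-default-out-policy
  service-policy type network-qos fcoe-default-nq-policy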

- As soon as we add the commands above, we can see a change in the interface status.

vfc_up_log.png

- Check "sh lldp dcbx interface e2/8" again and vrify "type 003" in local device (Nexus5k)

sh_lldp_dcbx_2_new.png

Note : Refer back to the Step 5 explanation. The PFC feature is a 16-bit structure (an 8-bit priority bitmap followed by an 8-bit number of traffic classes), and for the N5K the expected value is 0808.


- Check the VFC interface again. The trunk VSAN state looks good now.

sh_int_vfc_2_new.png

- Lastly, check the FLOGI database. We now see the NetApp CNA port WWPN.

sh_flogi_data_2_new.png



Thursday, May 8, 2014

VSG deployment and Integration with VSM VNMC and vCenter

Introduction


Cisco Virtual Security Gateway (VSG) is a virtual firewall for Cisco Nexus 1000V Switches that delivers security and compliance for virtual computing environments. Cisco VSG uses virtual network service data path (vPath) technology embedded in the Cisco Nexus 1000V Series Virtual Ethernet Module (VEM). However, when you deploy the VSG, it can be overwhelming to understand which element is meant to interact with which. It can also be a huge obstacle when you troubleshoot any type of VSG issue.

So the purpose of this document is to explain the core components of a VSG deployment and how they relate to each other: what needs to be configured and where it should be applied.

vsg_high_architecture.png




Solution Components

Virtual Network Management Center (VNMC)
- Cisco VNMC is a virtual appliance that provides centralized device and security policy management of the Cisco VSG.

Virtual Security Gateway (VSG)
- VSG operates with the Cisco Nexus 1000V Series distributed virtual switch in VMware vSphere hypervisor, and it uses the vPath embedded in the Nexus 1000V Series VEM.

Nexus1000V Switches
- Nexus 1000V Series Switches are virtual machine access switches that are an intelligent software switch implementation for VMware vSphere environments running the Cisco NX-OS Software operating system.

VMware vCenter
- VMware vCenter Server manages the vSphere environment and provides unified management of all the hosts and VMs in the data center from a single console.




Understanding of Communication between the devices

VNMC-to-vCenter Communication
- VNMC registers to vCenter to have visibility into the VMware environment. This allows the security administrator to define the policies based on the VMware VM attributes. VNMC integrates via an XML plug-in. The process is similar to the way the Cisco Nexus 1000V VSM integrates with vCenter. The communication between VNMC and vCenter takes place over a Secure Sockets Layer (SSL) connection on port 443


VNMC-to-VSG Communication
- VSG registers to VNMC via the policy agent configuration done on VSG. Once registered, VNMC pushes the security and device polices to VSG. No policy configuration is done via the VSG command-line interface (CLI) once it is registered to VNMC. The CLI is available to the administrator for monitoring and troubleshooting purposes. Communication between VSG and VNMC takes place over an SSL connection on port 443


VNMC-to-VSM Communication
- VSM registers to VNMC via the policy agent configuration done on VSM. The steps to register are similar to those for VSG-to-VNMC registration. Once registered, VSM will be able to send IP-to-VM bindings to VNMC. IP-to-VM mapping is required by the VSG for evaluating policies that are based on VM attributes. VSM also resolves the security-profile-id using VNMC. This security-profile-id is sent in every vPath packet to VSG and is used to identify the policy for evaluation. The communication between VSM and VNMC takes place over an SSL connection on port 443.


VSG-to-VEM (vPATH) Communication
- VSG receives traffic from VEM when protection is enabled on a port profile. The redirection of the traffic occurs via vPath. vPath encapsulates the original packet with the VSG’s MAC address and sends it to VSG. VSG has a dedicated interface (Data 0). VEM uses this interface to attain the VSG’s MAC address by performing Address Resolution Protocol (ARP) to that IP address. Cisco VSG is required to be Layer 2 adjacent to vPath. The mechanism used for communication between vPath and VSG is similar to that used for communication between VEM and the Cisco Nexus 1000V Series on a packet VLAN. VSG evaluates policies on the first packet of each flow that is redirected by vPath. VSG then transmits the policy evaluation results to vPath. vPath maintains the result in the flow table, and subsequent packets of the flow are permitted or denied based on the result cached in the flow table



VSG Setup requirements
VSG uses three vNICs:
- Management : VNMC talks to vCenter, VSM, and VSG over the management VLAN.
- HA : a dedicated VLAN of its own is recommended.
- Data : N1KV vPath and VSG communicate over this VLAN.

Installation and Initial Setup
1. Install the VNMC as a virtual appliance
2. Install the VSG as a virtual appliance
3. Register VSG to VNMC
vsg_vnm-pa.png

4. Register VSM to VNMC
vsm_vnm-pa.png

5. Register VNMC to vCenter


vCenter_vnmc_vsm.png


At VSM

1. Login to the VSM

2. Configure "port-profile". In this example, vsg_pp_tenant-anam" is the new port-profile we will use traffic redirection to VN service. This new port-profile should be seen from vCenter when you configure "Network Connection".

port-profile.png

3. Configure "vservice node". In this example "an-vsg" is the vservice mode name and service type is "VSG".

vservice.png
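
The exact syntax varies by Nexus 1000V/VSG release, but roughly the two steps above look like the following on the VSM. The IP address, VLAN, org and security-profile name are hypothetical; "an-vsg" and "vsg_pp_tenant-anam" come from this example:

vservice node an-vsg type vsg
  ip address 192.168.10.10                 ! VSG data interface reachable from vPath
  adjacency l2 vlan 100                    ! VSG must be L2-adjacent to vPath on the data VLAN

port-profile type vethernet vsg_pp_tenant-anam
  org root/Tenant-A
  vservice node an-vsg profile tenant-sp   ! redirect this port-profile's traffic to the VSG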


At vCenter

1. Login to vCenter and verify if this new port-profile is visible.

vCenter_port-profile.png


At VNMC

1. Login to VNMC


2. If your VSG is properly configured to talk to VNMC, you should be able to see the VSG under "Resource Management > Resources > Firewalls > All VSG". Confirm that the VSG shows up in this list. If it does not, resolve this issue by properly registering your VSG. In this example, VSG is shown as "an-vsg".





vnmc_an-vsg.png


Once VSG is properly registered as above, you are good to configure the security policies to control VM traffic.

UCS B - Disk failure Troubleshooting via IPMI

Introduction

This document describes several command-line interface (CLI) commands, as well as other troubleshooting techniques, that can help troubleshoot hard disk drive (HDD) issues on UCS B-Series. We discuss a fast and accurate method for troubleshooting HDD issues using the IPMI sensor CLI output, as well as post-checks using the show tech output.



Troubleshooting Steps


UCSM
- Start from UCSM and look at what the Faults tab says about the failure
- Go to Equipment > Chassis > Server : Faults tab
- In the example below, local disk 1 on server 4/1 reports a drive fault.

ucsm_faults_tab.png



IPMI (Intelligent Platform Management Interface) sensor reading.
- You can check the status of the HDD from the IPMI sensor reading output. This method is very quick and useful when you do live troubleshooting. Here are the steps.

Step 1. Connect to the CIMC Debug Firmware Utility Shell

connect cimc <chassis/blade number>

 conn_cimc.png


Step2. type "sensors fault" as above and you now see disk status.

There are two Hard Disks in this case. Each disk has a different status. One is 0x2202 and the other 0x0101. 
What this means?


Code Interpretation :

Bit[15:10] - Unused
Bit[9:8]   - Fault
Bit[7:4]   – LED Color
Bit[3:0]   – LED State

Fault:
0x100 – On Line
0x200 - Degraded

LED Color:
0x10 – GREEN
0x20 – AMBER
0x40 – BLUE
0x80 – RED

LED State:
0x01 – OFF
0x02 – ON
0x04 – FAST BLINK 
0x08 – SLOW BLINK


Examples :

1. 0x0101
Fault : 0x100 indicates On Line
LED state : 0x01 indicates OFF

So HDD1 is in a normal state.

2. 0x2202
Fault : 0x200 indicates Degraded
LED state : 0x02 indicates ON


So HDD2 is degraded and should be replaced.
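
If you want to decode a sensor value quickly, the bit masks above can be applied with a throwaway shell snippet like the one below. This is just an illustration to run on any Linux box, not a CIMC command:

VAL=0x2202
printf 'fault=0x%x  led_state=0x%02x\n' $(( VAL & 0x300 )) $(( VAL & 0x0f ))
# prints: fault=0x200  led_state=0x02  ->  Degraded, LED ON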


If you already have a show tech, then look for the following outputs.

1. "show fault" from sam_techsupportinfo

Severity: Major
Code: F0181
Last Transition Time: 2014-03-08T14:45:06.209
ID: 1263592
Status: None
Description: Local disk 1 on server 4/1 operability: inoperable. Reason: Firmware Detected Drive Fault    <<<<<----
Affected Object: sys/chassis-4/blade-1/board/storage-SAS-1/disk-1
Name: Storage Local Disk Inoperable
Cause: Equipment Inoperable
Type: Equipment
Acknowledged: No
Occurrences: 1
Creation Time: 2014-03-08T14:45:06.209
Original Severity: Major
Previous Severity: Major
Highest Severity: Major





2. obfl-log 

5:2014 Mar  8 14:44:59:2.1(1a):selparser:-: selparser.c:667: # 38 02 00 00 01 02 00 00 3A 92 1A 53 20 00 04 0D 97 00 00 00 7F 01 FF FF # 238 | 03/08/2014 14:44:58 | CIMC | Drive slot(Bay) HDD0_INFO #0x97 | LED is on | Asserted
5:2014 Mar  8 14:44:59:2.1(1a):selparser:-: selparser.c:667: # 39 02 00 00 01 02 00 00 3B 92 1A 53 20 00 04 0D 97 00 00 00 7F 05 FF FF # 239 | 03/08/2014 14:44:59 | CIMC | Drive slot(Bay) HDD0_INFO #0x97 | LED color is amber | Asserted
5:2014 Mar  8 14:44:59:2.1(1a):selparser:-: selparser.c:667: # 3A 02 00 00 01 02 00 00 3B 92 1A 53 20 00 04 0D 97 00 00 00 7F 09 FF FF # 23a | 03/08/2014 14:44:59 | CIMC | Drive slot(Bay) HDD0_INFO #0x97 | Degraded | Asserted



3. sel log

 # 38 02 00 00 01 02 00 00 3A 92 1A 53 20 00 04 0D 97 00 00 00 7F 01 FF FF # 238 | 03/08/2014 14:44:58 | CIMC | Drive slot(Bay) HDD0_INFO #0x97 | LED is on | Asserted
 # 39 02 00 00 01 02 00 00 3B 92 1A 53 20 00 04 0D 97 00 00 00 7F 05 FF FF # 239 | 03/08/2014 14:44:59 | CIMC | Drive slot(Bay) HDD0_INFO #0x97 | LED color is amber | Asserted
 # 3A 02 00 00 01 02 00 00 3B 92 1A 53 20 00 04 0D 97 00 00 00 7F 09 FF FF # 23a | 03/08/2014 14:44:59 | CIMC | Drive slot(Bay) HDD0_INFO #0x97 | Degraded | Asserted



4. /var/log/message

5:2014 Mar  8 14:44:59:2.1(1a):selparser:-: selparser.c:667: # 38 02 00 00 01 02 00 00 3A 92 1A 53 20 00 04 0D 97 00 00 00 7F 01 FF FF # 238 | 03/08/2014 14:44:58 | CIMC | Drive slot(Bay) HDD0_INFO #0x97 | LED is on | Asserted
5:2014 Mar  8 14:44:59:2.1(1a):selparser:-: selparser.c:667: # 39 02 00 00 01 02 00 00 3B 92 1A 53 20 00 04 0D 97 00 00 00 7F 05 FF FF # 239 | 03/08/2014 14:44:59 | CIMC | Drive slot(Bay) HDD0_INFO #0x97 | LED color is amber | Asserted
5:2014 Mar  8 14:44:59:2.1(1a):selparser:-: selparser.c:667: # 3A 02 00 00 01 02 00 00 3B 92 1A 53 20 00 04 0D 97 00 00 00 7F 09 FF FF # 23a | 03/08/2014 14:44:59 | CIMC | Drive slot(Bay) HDD0_INFO #0x97 | Degraded | Asserted


Tuesday, April 1, 2014

UCS B - VIF Paths : IOM NIF and HIF Troubleshooting


Introduction

We often talk about packet loss on an IOM HIF or NIF interface, but how exactly are these interfaces tied to the corresponding physical interfaces on the FI and the blade? The goal of this document is to help you visualise the internal path from a blade to the fabric interconnect via the IOM (and vice versa), and to show how to read these interface stats.

This document builds on a previous document, "UCS B - VIF Paths Understanding and troubleshooting".



Troubleshooting Steps

In this example we will look at the virtual circuit 1779. This virtual circuit connects vnic0 in a blade with interface vethernet 1779 in Fabric Interconnect.

1.ucsm_vif.png


From what you see in UCSM above, we can visualise each interface element as in the diagram below. vnic0 automatically creates vethernet interface 1779 in the Fabric Interconnect and maintains virtual circuit 1779 (the orange dotted line). However, if any packet loss is observed, we may need to look at some of the stats inside the IOM.

Below is how to trace it. Basically, we can start from the "Server Port" on the FI. This server port connects to a NIF interface on the IOM, and a HIF interface on the IOM connects to vNIC0.

2.1.ucsb_infra.png

Next we need to identify the vNIC0 port. A vNIC is shown as a "FEX Host Port" in UCSM, and in NXOS it appears in "Ethernet #/#/#" format in "show fex detail". So the FEX host port is basically the blade-side interface representing the vNIC. In this example, shown below, it is FEX Host Port 1/1/1.

2.3.ucsm_vif_ports.png

If we redraw this, it looks like the diagram below:
- The yellow interface on FI-A represents the "Server Port", which is E1/17.
- The yellow interface on the Blade represents the "FEX Host Port", which is E1/1/1.


2.2.ucsb_infra.png

With the server port E1/17 and the FEX host port E1/1/1 we can identify the HIF and NIF ports on the IOM. We need to connect to NXOS and run the following command.

- Run "show system internal fex info satport ethernet 1/1/1"
- As the command output below shows, Eth1/1/1 connects to HIF7. HIF7 is pinned to NIF3, and NIF3 connects to the server port E1/17.

fex_info_satport1.png

Now we can look at the IOM architecture and the IOM command output to understand more clearly what is really happening with the traffic flow.

3.ucsb_iom.png


From the Fabric Interconnect CLI, we need to connect to the IOM.

connect iom <chassis #>
show platform software redwood sts 

This shows a representation of exactly how the FEX is being used.
+ Use "redwood" for the IOM 2104; use "woodside" for the IOM 2204/2208.
+ This command shows all the internal blade-facing ports, known as HIFs (Host Interfaces), and all the network interfaces facing the FI, known as NIFs (Network Interfaces).
+ We see the ASIC number as 0; this 0 will be used with the rmon command later.


Check the NIF interface stats to see if there are any outstanding errors or drops facing the FI :

show platform software redwood rmon 0 nif3

iom_nif.png






Check the HIF interface stats to see if there are any outstanding errors or drops facing the blade :
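
The HIF screenshot below was presumably produced with the HIF counterpart of the NIF command above, i.e. something along the lines of:

show platform software redwood rmon 0 hif7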

iom_hif.png


In this example, no packet errors or drops are observed. However, if any loss or slow performance arises in the UCS B infrastructure, it is important to check and monitor the IOM NIF and HIF interface stats.

Wednesday, February 12, 2014

UCS Director : How to collect log from Admin GUI?

The CUIC logging mechanism is built into the product. There are two ways to check and collect the logs:


1. Web GUI, accessed directly via the following URL

https://<CUIC IP Addr>/app/cloudmgr/cloupiaCARE/

or

From CUIC main menu go to "Administration" > "System Administration" > "Support Information"

cloudCARE.png

Note : System Information (Advanced) will give "Top Process" info, Memory info, System Task info. This is more useful when the system is experiencing some sort of performance issue.

2. SSH shelladmin

ssh shelladmin@<cuic ip address>
Default PW : changeme

shelladmin.png

DB backup and restore can be performed from shelladmin. Keeping a good backup is a very important practice.

Thursday, February 6, 2014

UCS B - Understanding and troubleshooting UCS B Infra VIF Paths

Background


In a UCS B environment, many components are described with many different kinds of virtual terminology, such as vnic, vmnic, vif, vethernet, virtual circuit, border port, uplink port, server port, virtual cable, and physical cable. It is not surprising that there can be confusion about which path packets are actually taking through the UCS infrastructure.

However, knowing the full data path through the UCS infrastructure is critical to understanding where to troubleshoot.


UCS B Infrastructure VIF Path Physical and Logical Architecture
1.2.ucsb_infra_vif_path.png



Green command output

At the FI, connect to nxos
     - The border interface connects to the uplink switch; we can see that Po2 is the border interface connecting to the uplink switch.
     - Run the "show port-channel summary" command to see the member ports. We can see the member ports are Eth1/1 and Eth1/2.
     - Run "show cdp neighbors interface eth1/1" and "show cdp neighbors interface eth1/2"; we can see that Eth1/1 connects to uplink switch Nexus5K-1 and Eth1/2 connects to Nexus5K-2. (The commands are summarized right below.)

2.sh_pinng_border.png

3.sh_portchannel_sum.png

4.sh_cdp_ne.png




Orange command output

Return to the FI console from NXOS and run "show service-profile circuit server 1/1". This command gives you the "virtual circuit" information. The next key concept to understand is that whenever a vNIC is created on a Cisco CNA such as the Virtual Interface Card (VIC), a corresponding virtual Ethernet port is automatically created on the fabric interconnect (on both FIs if fabric failover is enabled) and connected to the vNIC with a virtual cable, as shown above. This creates a Virtual Network Link (VN-Link).

Here is the cmd output
5.sh_service-profile_circuit.png

Go to UCSM > Servers > select the blade's Service Profile > VIF Paths
     + "Virtual Circuit 1740" is created between "vnic0" and "VIF 1740". This VIF 1740 appears as interface vethernet 1740 in the FI NXOS.


6.ucsm_vif.png

Return to NXOS and run "show run interface vethernet 1740". Now we see that Vethernet 1740 references server 1/1, is configured as a trunk port, and shows which VLANs are allowed through. More importantly, this Vethernet 1740 is bound to Ethernet1/1/1. The first 1 refers to the chassis number; ignore the second 1; the last 1 is the server number.

7.sh_run_int_veth1740.png

Explore the Ethernet 1/1/1 interface further with "show run interface ethernet1/1/1"
     - This output tells us that when traffic leaves interface E1/1/1, it is VNTag-tagged and sent out to fabric interface Eth1/17. This Eth1/17 is the "server port" you define in UCSM on the FI.

8.sh_run_int_eth1:1:1.png

The "Server Port" must show "switchport mode" as "fex-fabric". It also tells this server port is associated with FEX (IOM).


9.1.sh_run_int_eth1:17.png

Also run "show pinning server-interface". This output may confuse you but it shows virtual interface as well as physical interface that faces the server.

9.2.sh_pin_server-int.png




Yellow command output

The next command you will use is "show fex detail".
- A FEX port is basically a server-facing port. This FEX port is connected to a fabric port on the Fabric Interconnect.

10.sh_fex_De.png

- The next command you need to understand is "show interface fex-fabric".
     + This command tells you which FI server port (fabric port) is physically connected to which IOM port. So Eth1/17 is connected to FEX (IOM) port 1, Eth1/18 to port 2, Eth1/19 to port 3, and Eth1/20 to port 4 accordingly.


11.sh_int_fex-fabric.png



Blue command output

- Go to UCSM, browse to the DCE interfaces, and check the MAC address of the DCE 1 interface. We can confirm that this DCE interface corresponds to Ethernet1/1/1.

ucsm_server1_dce1.png

- Return to FI NXOS and check the MAC address with the "show interface e1/1/1" and "show mac address | inc 9a" commands

sh_int_e1:1:1.png

sh_mac_address_dce.png





At Esxi Server

SSH into the ESXi host running on blade 1/1 and run the "esxcfg-nics -l" command. You can see 6 vNICs. In the ESXi hypervisor, a vNIC is referred to as a "vmnic".

12.esxci-nics.png

Return to NXOS and you can confirm that the MAC address of vmnic1 belongs to Veth1740.

13.sh_mac_addr.png