
Since our design is embedded in the CORD architecture, the packet flow for the different services must be examined in order to visualize how the services are requested. As explained before, the virtual BRAS network function is deployed on white-box servers of the CORD infrastructure\footnote{According to the NFV Framework and Scope, this hardware provides computing resources, storage, and physical/virtual networking.}. Our VNF therefore connects physically to one or more leaf switches belonging to the leaf-spine switch structure promoted by CORD. For the representation of the packet flow of our system, we assume that the customer premise equipment (CPE) is aggregated under a single leaf switch, which serves as the entry point for the analysis of the packet flow.

This analysis is split into two parts, namely initialization and establishment. The former comprises a typical description of the first access service connection, the provisioning of customer resources, and the enforcement of service constraints, while the latter refers to internal or external services requested by the user, e.g. internet access. Given the limited information provided by CORD regarding the network implementation, and the freedom offered by implementing applications on top of SDN controllers, we propose a solution for handling the packets with respect to a set of considered protocols. The general scenario for the subsequent analysis is presented in figure \ref{fig:Scenario_Packet_flow_analysis}; it will be adapted throughout this section to highlight the different considerations taken towards a proper operation of the system.


Next, the initialization scenario is presented. It includes two different scenario negotiations, which depend on the underlying technology; we describe the principal header fields involved in the negotiation process. First, when using the \textit{Dynamic Host Configuration Protocol (DHCP)} with option 82, the virtual BRAS network function must be prepared to receive incoming connections. This involves a four-message protocol carried out between the system and the CPE.

The CPE is therefore the entity in charge of initiating a DHCP connection by means of a \textit{DHCP discovery packet}. The packet sent by the CPE carries a broadcast IP destination and an undefined source address. It uses UDP as the transport protocol, with source and destination ports 68 and 67 respectively. The MAC address of the CPE, carried in the header of the discovery packet, is used as a means of identification. The protocol allows certain features to be requested and negotiated during the initialization process; these are specified in the options field of the packet. Although many of these options could be used during initialization, for simplicity we assume that each CPE merely requests an IP address.
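The header fields just described can be summarized in a minimal sketch (field and class names are illustrative assumptions of this sketch, not part of any DHCP library):

```python
# Sketch of the DHCP discovery header fields the virtual BRAS expects,
# as described in the negotiation above.
from dataclasses import dataclass, field

@dataclass
class DhcpDiscover:
    src_mac: str                      # CPE MAC address, used for identification
    src_ip: str = "0.0.0.0"           # undefined source address
    dst_ip: str = "255.255.255.255"   # IP broadcast destination
    udp_sport: int = 68               # DHCP client port
    udp_dport: int = 67               # DHCP server port
    # Only an IP address request is assumed in the options field.
    options: dict = field(default_factory=lambda: {53: "DHCPDISCOVER"})

pkt = DhcpDiscover(src_mac="aa:bb:cc:dd:ee:ff")
```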

Thus, the broadcast packet is received on one of the interfaces of a leaf switch, which for the infrastructure presented in figure \ref{fig:Scenario_Packet_flow_analysis} is leaf switch number 1. As described before, this switch must install flows into its flow table in order to handle the customer's traffic properly, according to the configuration established in the controller. Therefore, a flow that summarizes the DHCP discovery packets regardless of the customer identification is preferred. To this end, we propose a rule which matches on the destination IP address, the transport protocol, and the destination port. This captures all requests stemming from customers asking for an IP address configuration via DHCP. Restricting the scenario primarily to IPv4, the matches are, respectively, destination address equal to the broadcast address (255.255.255.255), transport protocol equal to UDP, and destination port equal to 67\footnote{This can be replaced according to the application-specific settings of the system or due to design considerations.}. As a result, an action is required for the traffic matched by this rule: it must be steered towards the leaf switch that fronts the hardware of our virtualized system, which for the shown infrastructure corresponds to leaf switch number 2. In the design notes update of the CORD white paper \citep{peterson2016central}, it is mentioned that the infrastructure implements \textit{segment routing}, which essentially aggregates flows between servers across the switching fabric. We adopt this mechanism in our environment, since it simplifies the establishment of different paths among the switching elements.
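The proposed match-and-action rule at the ingress leaf switch can be sketched as follows (a minimal illustration; the field names and the action string are assumptions of this sketch, not OpenFlow API calls):

```python
# Sketch of the proposed leaf-switch flow rule: match broadcast IPv4
# destination, UDP transport, and destination port 67, regardless of
# which subscriber sent the packet.
def matches_dhcp_discovery(pkt: dict) -> bool:
    return (pkt.get("ipv4_dst") == "255.255.255.255"
            and pkt.get("ip_proto") == 17          # 17 = UDP
            and pkt.get("udp_dst") == 67)

# Matching traffic is steered towards the leaf switch fronting the BRAS (L2).
def action_for(pkt: dict) -> str:
    return "forward_to_L2" if matches_dhcp_discovery(pkt) else "default"
```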

For this purpose, a series of labels determine the traffic forwarding across the switch fabric. These labels are chained one after another to indicate the path the traffic follows, with the last switching element forwarding the packet to the end system. In the case shown in figure \ref{fig:Scenario_Packet_flow_analysis}, once the access traffic is received at leaf switch number 1 and directed to leaf switch number 2, it can follow three different paths, namely L1-S1-L2, L1-S2-L2, and L1-S3-L2, where L refers to a leaf switch and S to a spine switch.

This notation for leaf switch (LX) and spine switch (SX) is used hereinafter. Different labels, defined by the network controller, determine which of the paths must be used. For the representation of segment routing in the proposed scenario, a label named \textit{SR\_LX} represents the destination leaf switch intended for the traffic, with X identifying the corresponding leaf switch.

We assume that the segment routing label chain is included as part of the layer-two header, right after the destination and source MAC addresses. For origin identification purposes, the packet must also include a tag label which indicates the ingress leaf switch where the request was first received. This tag label is placed after the segment routing label chain and functions as a reverse-path boundary for the triggered broadcast traffic.
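The assumed header layout can be sketched as an ordered stack (the labels and MAC addresses below are example values, not prescribed by the design):

```python
# Sketch of the assumed layer-2 header stack: destination MAC, source MAC,
# then the segment routing label chain, then the origin tag of the ingress
# leaf switch.
def build_header(dst_mac, src_mac, sr_labels, origin_tag):
    # e.g. sr_labels = ["SR_L2"] selects a path terminating at leaf switch L2;
    # origin_tag = "L1" marks where the broadcast request first entered.
    return [dst_mac, src_mac, *sr_labels, origin_tag]

hdr = build_header("ff:ff:ff:ff:ff:ff", "aa:bb:cc:dd:ee:ff", ["SR_L2"], "L1")
```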

Once the DHCP discovery packet reaches L2, it has to be sent towards the BRAS data plane, whose connection port must be defined beforehand. A flow similar to the one defined in L1 must be installed in L2, matching the broadcast traffic, UDP as the transport protocol, and the destination port defined before. Additionally, it is required to check that the ingress port does not correspond to one of the ports connecting to the BRAS data path elements; if that holds, an action which steers the packet out of that port is needed.

The DHCP discovery packet is displayed in figure \ref{fig:DHCP_discovery_packet}. The virtual BRAS system receives the packet from the switch in one of its data path elements, processes it, and replies to the subscriber accordingly: it sends a DHCP offer packet to the respective CPE, which is identified through the source MAC address attached to the discovery message.

The answer to the client contains the offered IP address as well as additional configuration parameters that allow the client's traffic in the network. The packet delivered back to the switch is thus the same as the original, except that the source IP address is now that of the system and the source and destination transport ports are interchanged. The packet is received on a second port of the leaf switch, and a flow rule that matches the origin tag, the source IP, and the ingress port is used.

For the reference figure, these fields have to be equal to the L1 origin tag, the B IP address, and port P2 respectively. The corresponding action steers the packet towards L1, again employing segment routing for the transmission path. When the packet arrives at L1, the L1 origin tag and the source IP address of the BRAS system are matched to confirm that the packet has to be forwarded towards the customers through the port connecting to the access equipment aggregator. L1 then broadcasts the reply, and the customer's CPE receives the DHCP offer packet.
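The two reverse-path rules for the offer packet can be sketched as follows (the values "L1", "B", and "P2" are those assumed in the reference figure; the action strings are illustrative):

```python
# Sketch of the reverse-path handling of the DHCP offer packet.
def l2_offer_action(pkt: dict) -> str:
    # At L2: match origin tag, BRAS source IP, and the ingress port P2.
    if (pkt["origin_tag"] == "L1" and pkt["ipv4_src"] == "B"
            and pkt["in_port"] == "P2"):
        return "segment_route_to_L1"
    return "no_match"

def l1_offer_action(pkt: dict) -> str:
    # At L1: match origin tag and BRAS source IP, then broadcast towards
    # the access equipment aggregator.
    if pkt["origin_tag"] == "L1" and pkt["ipv4_src"] == "B":
        return "broadcast_on_access_port"
    return "no_match"
```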

The CPE can accept or decline the offered configuration parameters; which of the two occurs is irrelevant for the packet processing within the CORD architecture. Hence, we present here the case of a DHCP request response. The header fields are the same as those of the discovery packet, but the payload of the transport protocol now includes a field carrying the server IP address (the B IP address).

This packet is broadcast in the network and, once received at switch L1 on port P1, it follows the same processing and forwarding as the discovery packet. Thus the DHCP request reaches the virtual BRAS system, which replies with a DHCP acknowledge packet. This reply in turn follows the same processing, header field changes, and forwarding behavior as the DHCP offer packet. Figure \ref{fig:DHCP_offertoack_packet} shows the offer and acknowledge packet processing within the CORD architecture.
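The four-message exchange just completed can be summarized as a small state machine (the state names loosely mirror the DHCP client states of RFC 2131, but this is a simplified sketch rather than the full protocol):

```python
# Sketch of the four-message DHCP exchange between CPE and virtual BRAS.
TRANSITIONS = {
    ("INIT", "DISCOVER"): "SELECTING",        # CPE broadcasts discovery
    ("SELECTING", "OFFER"): "REQUESTING",     # BRAS offers IP + parameters
    ("REQUESTING", "REQUEST"): "ACK_PENDING", # CPE requests the offered config
    ("ACK_PENDING", "ACK"): "BOUND",          # BRAS acknowledges; session bound
}

def run(messages, state="INIT"):
    for msg in messages:
        state = TRANSITIONS[(state, msg)]
    return state

assert run(["DISCOVER", "OFFER", "REQUEST", "ACK"]) == "BOUND"
```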

In addition to its own processing, the virtual BRAS system sends an update towards the controller of the spine-leaf switch fabric, reporting the correspondence between the IP address pool that contains the recently configured subscriber, a service tag that enables a native differentiation of subscribers, and the BRAS data path element assigned to serve that subscriber. This allows the summarization of flows in the switch fabric, so that the number of rules on the switches need not grow as the number of subscribers increases. The information collected by the OpenFlow controller is vital for the future processing of incoming packets: not only can rules be summarized and applied to a group of subscribers, but they can also be installed proactively, so that, for instance, packets directed towards external networks, e.g. the internet, can be processed and forwarded directly by the leaf-spine switch fabric without making use of the slow path, i.e. avoiding communication with the controller. Three types of proactive configuration are performed depending on the switch function in the architecture, i.e. the access leaf switches, the intermediate switches, and the core switches receive different configurations. In principle, this should reduce the latency of the system as well as improve the performance of the service, which is reflected in a better quality of service. The proposed process is based on the provision of service tags which represent two things: first, the type of service provided and, second, the direction of the traffic. This means that a service originated from the subscriber's CPE and heading to an external network is assigned a service tag which is used along the switch infrastructure to simplify the processing and expedite the forwarding, whereas the reply to that request is handled with another service tag, indicating that it is travelling towards the subscribers. It is important to notice that the usage of these service tags is highly dependent on the grouping of users, since one service tag can be used by the traffic of multiple users belonging to the same cluster.
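The controller-side correspondence announced after the DHCP negotiation can be sketched as a small registry (all identifiers, pools, and tag names below are assumptions of this sketch):

```python
# Sketch of the controller mapping announced by the BRAS after DHCP completes:
# a subscriber IP pool maps to a serving data path element and to one service
# tag per traffic direction.
class TagRegistry:
    def __init__(self):
        self._by_pool = {}

    def register(self, ip_pool, dp_element, up_tag, down_tag):
        # One tag per (pool, direction): many subscribers share a tag, keeping
        # the switch-fabric rule count independent of the subscriber count.
        self._by_pool[ip_pool] = {"dp": dp_element,
                                  "upstream": up_tag, "downstream": down_tag}

    def tag_for(self, ip_pool, direction):
        return self._by_pool[ip_pool][direction]

    def dp_for(self, ip_pool):
        return self._by_pool[ip_pool]["dp"]

reg = TagRegistry()
reg.register("10.0.0.0/24", "DP_1", "S_TAG_UP_1", "S_TAG_DOWN_1")
```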

If this is not taken into account and a service tag handles the traffic of a single user, two issues arise: principally, the tags might not suffice for the total number of subscribers, and no summarization of rules is achieved on the switches, which may prevent any improvement in the performance of the system's processing and forwarding. Therefore, we will further study the grouping of sessions and the cluster size, and give recommendations on the appropriate considerations with respect to the per-user utilization of network resources as well as the traffic characteristics. This is presented later on and corresponds to the main contribution of this thesis. Next, it is presented how the proactively installed rules work in the scenario of an access request from a subscriber targeting an external service. This service request must be handled by the virtual BRAS network function before being forwarded towards the core of the network.

Bearing in mind that the deployment of this VNF spans the server infrastructure of the CORD architecture, the communication must reach the respective server, which then decides whether or not the packets are forwarded to the destination. To clarify the architecture's operation under the proposed working mechanism, the packet processing of a single-platform implementation (only one BRAS data path element) is detailed first. Afterwards, we expose the operation for two active BRAS data path elements. To this end, in the single-platform case we refer to the server that allocates the resources of the virtual BRAS network function as DP\_1; in the two-platform case, DP\_1 and DP\_2 are used.

Thus, a preliminary view of the end-to-end service is achieved in four general parts: (i) from the CPE to DP\_X, (ii) from DP\_X to the external service, (iii) from the external service back to DP\_X, and (iv) from DP\_X back to the CPE. Note that DP\_X refers to the server which contains the virtualized network function, with X specifying the number of the BRAS data path element. This four-step partition of the service is used from here on for figure presentation purposes, meaning that the description of the infrastructure's operation goes along with four figures that follow the given order. The packet processing across the infrastructure in the single-platform case (only DP\_1), for a customer requesting an external service, proceeds as follows. The CPE triggers a request with destination IP address equal to IP\_S1, as presented previously in figure \ref{fig:Scenario_Packet_flow_analysis}, which also carries a preconfigured customer tag (C\_TAG).

This request enters through the access aggregation platform, which forwards it towards L1. L1 receives the packet on its port P1 and analyzes the packet's header, where two operations are expected based on the proactive configuration performed at the conclusion of the DHCP negotiation. First, access control is enforced: if the packet corresponds neither to one of the allocated customer IP addresses nor to DHCP traffic, it must be dropped. For successfully authorized traffic, L1 needs to identify its IP pool as well as the destination IP address, which in this case is assumed to be a locally undefined IP address and is treated as a default destination.
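The two proactive checks at L1 for upstream traffic can be sketched as follows (the addresses, tag, and action strings are assumptions of this sketch):

```python
# Sketch of the upstream handling at L1: pass DHCP to its own pipeline, drop
# unauthorized sources, and tag-and-forward locally undefined destinations
# towards the leaf switch fronting the BRAS.
ALLOCATED = {"10.0.0.5"}   # customer IPs handed out via DHCP (example value)

def l1_upstream(pkt: dict):
    if pkt.get("udp_dst") == 67:
        return "dhcp_pipeline"          # handled by the DHCP rules above
    if pkt["ipv4_src"] not in ALLOCATED:
        return "drop"                   # access control
    # Locally undefined destination -> default route: push the upstream
    # service tag and segment-route towards L2, where DP_1 is attached.
    return ("push S_TAG_UP_1", "segment_route_to_L2")
```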

The resulting action is required to add a service tag (S\_TAG) which indicates, as explained before, the direction of the service (towards the core network) and the group of IP addresses. The target BRAS platform is thus identified, which in this case is assumed to be placed under switch L2. Hence, a corresponding action must forward the traffic to switch L2, which is done using segment routing.

When the packet is received at L2 on one of the ports connected to the spine switches, a flow matches the S\_TAG and the ingress port. The subsequent action sends the packet out of the port leading to the ingress port of DP\_1. DP\_1 is configured to process the subscriber's packet: it pops the C\_TAG from the packet and, if no traffic restriction for the customer is found, outputs it on the secondary port connected to L2. This comprises the first part of the service packet processing, which is shown in figure \ref{fig:Packet_comm_CPE_to_DP_X}. Note that the service tag S\_TAG is preserved and is employed to match the incoming packets at the port which connects to the output of DP\_1; if matched, the packet is redirected towards a core switch, which in this case corresponds to L4. The forwarding of the packet is done using segment routing, pointing to the previously mentioned switch.
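The per-packet work attributed to DP\_1 can be sketched as follows (the restriction set and action strings are assumptions of this sketch):

```python
# Sketch of DP_1 processing: pop the customer tag and, absent restrictions,
# hand the packet back to L2 on the secondary port. The S_TAG stays on the
# packet so L2 can match it at the DP_1 output port.
RESTRICTED = set()   # e.g. suspended subscribers (assumed structure)

def dp1_process(pkt: dict) -> str:
    c_tag = pkt.pop("c_tag")            # pop the C_TAG from the header
    if c_tag in RESTRICTED:
        return "drop"
    return "output_secondary_port_to_L2"
```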

In the flow table of L4, a proactively installed flow matches the S\_TAG upon packet reception and removes it. L4 then steers the packet to the metro-core platform, which uses classical protocols as configured by the provider; this lies beyond the scope of the CORD architecture and is therefore not detailed here. For simplicity, the packet is assumed to be a pure IP packet which is forwarded until reaching S1. This comprises the second part of the service packet processing and is shown in figure \ref{fig:Packet_comm_DP_X_to_S1}. The packet is now returned from S1, with the subscriber's CPE as destination, and is received at the metro-core platform, which forwards it to the CORD leaf switch designated for core services, in this case L4. Upon reception of the packet at L4, a flow must match the destination address belonging to the subscriber's CPE. This rule is established proactively for a pool of IP addresses corresponding to one or several authorized users, among which the referred session is included.

The action for this rule adds a new S\_TAG to the packet's header, which must differ from the previously employed one so that the switch fabric can identify that the service flow is directed towards the access network. Additionally, as part of the same action, L4 must redirect the packet towards L2, where DP\_1 is deployed; segment routing is used again to connect the leaf switches with each other. When the packet arrives at L2, a rule evaluates whether the ingress port belongs to one of the ports connected to the spine switches and whether the new S\_TAG matches; correspondingly, its action steers the packet out of the port connecting to DP\_1. The packet is subsequently processed by DP\_1, which, if no restriction is encountered, adds the respective C\_TAG to the packet and forwards it back to L2. This description corresponds to the third part of the service packet processing and can be appreciated in figure \ref{fig:Packet_comm_DP_X_to_S1}. Finally, we enter the fourth step of the process, in which the packet travels from DP\_1 towards the CPE.
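The proactive downstream rule at L4 can be sketched as follows (the pool, the tag name, and the crude /24 prefix check are assumptions of this sketch):

```python
# Sketch of the downstream rule at L4: match the destination address against
# an authorized subscriber pool, push the direction-distinct downstream tag,
# and segment-route the packet back to L2.
POOL_TO_DOWN_TAG = {"10.0.0.0/24": "S_TAG_DOWN_1"}

def l4_downstream(pkt: dict):
    for pool, tag in POOL_TO_DOWN_TAG.items():
        # Crude /24 membership check for illustration only.
        prefix = pool.split("/")[0].rsplit(".", 1)[0]
        if pkt["ipv4_dst"].startswith(prefix + "."):
            return ("push " + tag, "segment_route_to_L2")
    return "drop"
```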

The packet arrives at L2 from DP\_1 and, using the combination of S\_TAG and ingress port, which corresponds to the particular connection to DP\_1, it is matched and forwarded in the direction of L1. After segment routing delivers the packet to L1, it is matched based on the S\_TAG. As the resulting action, L1 removes the S\_TAG from the packet's header and sends it out of the port that connects to the access aggregator platform, which consequently redirects the packet towards the CPE, thereby closing the end-to-end communication process. The last part of the communication is exhibited in figure \ref{fig:Packet_comm_DP_X_to_CPE}.
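The final hop at L1 can be sketched in the same style (tag and action names are assumptions of this sketch):

```python
# Sketch of the last step at L1: match the downstream service tag, strip it,
# and send the packet out of the access-aggregator port towards the CPE.
def l1_downstream(pkt: dict) -> str:
    if pkt.get("s_tag") == "S_TAG_DOWN_1":
        pkt.pop("s_tag")                 # remove S_TAG from the header
        return "output_access_aggregator_port"
    return "no_match"
```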

