1-isp-network-design.pdf

April 2, 2018 | Author: Sourav Satpathy | Category: Router (Computing), Computer Network, Internet Service Provider, Ip Address, Routing



ISP Network Design
ISP Training Workshops

ISP Network Design
- PoP Topologies and Design
- Backbone Design
- Addressing
- Routing Protocols
- Security
- Out of Band Management
- Operational Considerations

Point of Presence Topologies

PoP Topologies
- Core routers – high speed trunk connections
- Distribution routers and Access routers – high port density
- Border routers – connections to other providers
- Service routers – hosting and servers
- Some functions might be handled by a single router

PoP Design
- Modular design
- Aggregation services separated according to:
  - connection speed
  - customer service
  - contention ratio
  - security considerations

Modular PoP Design
[Diagram: network core with backbone links to other PoPs; border connections to other ISPs; ISP services (DNS, Mail, News, FTP, WWW); web cache; hosted services and datacentre; consumer cable, xDSL and wireless access; consumer dial access; leased line customer aggregation layer with channelised circuits for leased line circuit delivery; MetroE customer aggregation layer with GigE fibre trunks for MetroE circuit delivery; Network Operations Centre]

Modular Routing Protocol Design – Smaller ISPs
- Modular IGP implementation
  - IGP "area" per PoP
  - Core routers in backbone area (Area 0/L2)
  - Aggregation/summarisation into the core where possible
- Modular iBGP implementation
  - BGP route reflector cluster per module
  - Core routers are the route reflectors
  - Remaining routers are clients and peer with the route reflectors only

Modular Routing Protocol Design – Larger ISPs
- Modular IGP implementation
  - IGP "area" per module (but avoid overloading core routers)
  - Core routers in backbone area (Area 0/L2)
  - Aggregation/summarisation into the core where possible
- Modular iBGP implementation
  - BGP route reflector cluster per module
  - Dedicated route reflectors adjacent to core routers
  - Clients peer with the route reflectors only

Point of Presence Design
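The per-PoP route-reflector cluster described above can be sketched in IOS-style configuration. The AS number and loopback addresses are illustrative only:

```
! Core router acting as route reflector for its PoP (AS 64500 illustrative)
router bgp 64500
 ! iBGP to the other core/route-reflector router
 neighbor 223.10.6.2 remote-as 64500
 neighbor 223.10.6.2 update-source Loopback0
 ! PoP aggregation/access routers are route-reflector clients
 neighbor 223.10.6.3 remote-as 64500
 neighbor 223.10.6.3 update-source Loopback0
 neighbor 223.10.6.3 route-reflector-client
 neighbor 223.10.6.4 remote-as 64500
 neighbor 223.10.6.4 update-source Loopback0
 neighbor 223.10.6.4 route-reflector-client
```

On the client routers only two iBGP sessions are configured, one to each route reflector in the PoP core, so the full iBGP mesh is confined to the core.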
PoP Modules
- Low speed customer connections
  - PSTN/ISDN dialup
  - Low bandwidth needs
  - Low revenue, large numbers
- Leased line customer connections
  - E1/T1 speed range
  - Delivery over channelised media
  - Medium bandwidth needs
  - Medium revenue, medium numbers

PoP Modules
- Broadband customer connections
  - xDSL, cable and wireless
  - High bandwidth needs
  - Low revenue, large numbers
- MetroE and highband customer connections
  - Trunk onto GigE or 10GigE for 10Mbps and higher
  - Channelised OC3/12 delivery of E3/T3 and higher
  - High bandwidth needs
  - High revenue, low numbers

PoP Modules
- PoP core
  - Two dedicated routers
  - High speed interconnect
  - Backbone links ONLY – do not touch them!
- Border network
  - Dedicated border router to other ISPs – the ISP's "front door"
  - Transparent web caching?
  - Two in the backbone is the minimum guarantee of redundancy

PoP Modules
- ISP services
  - DNS (cache, proxy, secondary)
  - News (still relevant?)
  - Mail (POP3, relay, anti-virus/anti-spam)
  - WWW (server, proxy, cache)
- Hosted services/datacentres
  - Virtual web
  - Information/content services
  - Electronic commerce

PoP Modules
- Network Operations Centre
  - Consider primary and backup locations
  - Network monitoring
  - Statistics and log gathering
  - Direct but secure access
- Out of Band Management Network
  - The ISP network's "safety belt"

Low Speed Access Module
[Diagram: access servers with built-in modems and Primary Rate T1/E1 to a modem bank on PSTN lines; access network gateway routers to the core routers; web cache; TACACS+/RADIUS proxy, DNS resolver, content]

Medium Speed Access Module
[Diagram: aggregation edge with channelised T1/E1 delivering a mixture of channelised T1/E1, 56/64K and nx64K circuits; connections to the core routers]

High Speed Access Module
[Diagram: aggregation edge with Metro Ethernet, channelised T3/E3 and channelised OC3/OC12; connections to the core routers]
Broadband Access Module
[Diagram: telephone network into DSLAM and BRAS; cable system into cable RAS; SSG; access network gateway routers to the core routers; web cache; IP, DHCP, TACACS+ or RADIUS servers/proxies; DNS resolver; content]

ISP Services Module
[Diagram: service network gateway routers to the core routers; WWW cache, DNS cache, DNS secondary, POP3, mail relay, news]

Hosted Services Module
[Diagram: hosted network gateway routers to the core routers; Customers 1 through 7 on separate segments]

Border Module
[Diagram: network border routers to ISP1, ISP2 and the local IXP, and to the core routers. NB: the IXP-facing router has no default route and carries the local AS routing table only]

NOC Module
[Diagram: critical services module and out-of-band management network behind gateway routers to the core routers; firewall to the corporate LAN; 2811/32-async terminal server; NetFlow analyser, TACACS+ server, SYSLOG server, primary DNS; Network Operations Centre staff; billing, database and accounting systems]

Out of Band Network
[Diagram: out-of-band management network – terminal server to router consoles, out-of-band Ethernet, NetFlow-enabled routers exporting to a NetFlow collector, link to the NOC]

Backbone Network Design

Backbone Design
- Routed backbone
- Switched backbone
  - Virtually obsolete
- Point-to-point circuits
  - nx64K, T1/E1, T3/E3, OC3, OC12, OC48, OC192, OC768
  - GigE, 10GigE
- ATM/Frame Relay service from telco
  - T3, OC3, OC12, … delivery
  - Easily upgradeable bandwidth (CIR)
  - Almost vanished in availability now

Distributed Network Design
- PoP design "standardised"
  - Operational scalability and simplicity
- ISP essential services distributed around the backbone
- NOC and "backup" NOC
- Redundant backbone links

Distributed Network Design
[Diagram: three PoPs in a redundant ring, each with customer connections and ISP services; PoP One hosts the Operations Centre, PoP Two the Backup Operations Centre; PoPs One and Three have external connections]

Backbone Links
- ATM/Frame Relay
  - Virtually disappeared due to overhead,
    extra equipment, and being shared with other customers of the telco
  - MPLS has replaced ATM & FR as the telco favourite
- Leased line/circuit
  - Most popular with backbone providers
  - IP over optics and Metro Ethernet very common in many parts of the world

Long Distance Backbone Links
- These usually cost more
- Important to plan for the future
  - This means at least two years ahead
  - Stay in budget, stay realistic
  - Unplanned "emergency" upgrades will be disruptive without redundancy in the network infrastructure

Long Distance Backbone Links
- Allow sufficient capacity on alternative paths for failure situations
  - Sufficient can depend on the business strategy
  - Sufficient can be as little as 20%
  - Sufficient is usually over 50%, as this offers "business continuity" for customers in the case of link failure
  - Some businesses choose 0% – very short sighted, meaning they have no spare capacity at all!!

Long Distance Links
[Diagram: long distance link between PoP One and PoP Two, with an alternative/backup path via PoP Three]

Metropolitan Area Backbone Links
- Tend to be cheaper
  - Circuit concentration
  - Choose from multiple suppliers
- Think big
  - More redundancy
  - Less impact of upgrades
  - Less impact of failures

Metropolitan Area Backbone Links
[Diagram: metropolitan links interconnecting PoPs One, Two and Three, alongside traditional point-to-point links]

Upstream Connectivity and Peering

Transits
- A transit provider is another autonomous system which is used to provide the local network with access to other networks
  - Might be local or regional only
  - But more usually the whole Internet
- Transit providers need to be chosen wisely:
  - Only one: no redundancy
  - Too many: more difficult to load balance; no economy of scale (costs more per Mbps); hard to provide service quality
- Recommendation: at least two, no more than three
Common Mistakes p  ISPs sign up with too many transit providers n  n  n  p  Lots of small circuits (cost more per Mbps than larger ones) Transit rates per Mbps reduce with increasing transit bandwidth purchased Hard to implement reliable traffic engineering that doesn’t need daily fine tuning depending on customer activities No diversity n  n  Chosen transit providers all reached over same satellite or same submarine cable Chosen transit providers have poor onward transit and peering . where providers meet and freely decide who they will interconnect with Recommendation: peer as much as possible! .Peers p  p  A peer is another autonomous system with which the local network has agreed to exchange locally sourced routes and traffic Private peer n  p  Public peer n  p  Private link between two providers for the purpose of interconnecting Internet Exchange Point. Common Mistakes p  Mistaking a transit provider’s “Exchange” business for a no-cost public peering point p  Not working hard to get as much peering as possible Physically near a peering point (IXP) but not present at it n  (Transit sometimes is cheaper than peering!!) n  p  Ignoring/avoiding competitors because they are competition n  Even though potentially valuable peering partner to give customers a better experience . 
Private Interconnection p  Two service providers agree to interconnect their networks They exchange prefixes they originate into the routing system (usually their aggregated address blocks) n  They share the cost of the infrastructure to interconnect n  Typically each paying half the cost of the link (be it circuit, satellite, microwave, fibre,…) p  Connected to their respective peering routers p  n  Peering routers only carry domestic prefixes 39 Private Interconnection Upstream Upstream ISP2 PR PR ISP1 p  PR = peering router n  n  n  n  p  Runs iBGP (internal) and eBGP (with peer) No default route No “full BGP table” Domestic prefixes only Peering router used for all private interconnects40 Public Interconnection p  Service provider participates in an Internet Exchange Point It exchanges prefixes it originates into the routing system with the participants of the IXP n  It chooses who to peer with at the IXP n  Bi-lateral peering (like private interconnect) p  Multi-lateral peering (via IXP’s route server) p  It provides the router at the IXP and provides the connectivity from their PoP to the IXP n  The IXP router carries only domestic prefixes n  41 Public Interconnection Upstream ISP6-PR ISP5-PR IXP ISP4-PR ISP3-PR p  ISP1 ISP2-PR ISP1-PR = peering router of our ISP n  n  n  n  p  ISP1-PR Runs iBGP (internal) and eBGP (with IXP peers) No default route No “full BGP table” Domestic prefixes only Physically located at the IXP 42 . 
Public Interconnection
- The ISP's IXP peering router needs careful configuration:
  - It is remote from the domestic backbone
  - Should not originate any domestic prefixes
  - (As well as no default route, no full BGP table)
  - Filtering of BGP announcements from IXP peers (in and out)
- Provision of a second link to the IXP:
  - (For redundancy or extra capacity)
  - Usually means installing a second router
    - Connected to a second switch (if the IXP has two or more switches)
    - Interconnected with the original router (and part of the iBGP mesh)

Public Interconnection
[Diagram: ISP1 with two peering routers (ISP1-PR1 and ISP1-PR2) at the IXP, alongside the peering routers of ISPs 2 to 6]
- Provision of a second link to the IXP means considering redundancy in the SP's backbone
  - Two routers
  - Two independent links
  - Separate switches (if the IXP has two or more switches)

Upstream/Transit Connection
- Two scenarios:
  - Transit provider is in the locality
    - Which means bandwidth is cheap, plentiful, easy to provision, and easily upgraded
  - Transit provider is a long distance away
    - Over undersea cable, satellite, long-haul cross country fibre, etc
- Each scenario has different considerations which need to be accounted for

Local Transit Provider
[Diagram: ISP1's aggregation router (AR) and border router (BR), with the BR connecting to the transit provider]
- BR = ISP's border router
  - Runs iBGP (internal) and eBGP (with transit)
  - Either receives a default route or the full BGP table from the upstream
  - BGP policies are implemented here (depending on connectivity)
  - Packet filtering is implemented here (as required)

Distant Transit Provider
[Diagram: ISP1's border router (BR) co-located with the transit provider, reached over long-haul links from aggregation routers AR1 and AR2]
- BR = ISP's border router
  - Co-located in a co-lo centre (typical) or in the upstream provider's premises
  - Runs iBGP with the rest of the ISP1 backbone
  - Runs eBGP with the transit provider router(s)
  - Implements BGP policies, packet filtering, etc
  - Does not originate any domestic prefixes
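The border router case differs from the peering router: here a default route (or the full table) is accepted from the upstream. A minimal IOS-style sketch of the default-route-only variant, with illustrative addresses and AS numbers:

```
! ISP1 border router taking just a default route from the transit provider
router bgp 64500
 neighbor 203.0.113.1 remote-as 64510
 neighbor 203.0.113.1 prefix-list DEFAULT-ONLY in
 neighbor 203.0.113.1 prefix-list OUR-AGGREGATE out
!
ip prefix-list DEFAULT-ONLY permit 0.0.0.0/0
ip prefix-list OUR-AGGREGATE permit 223.10.0.0/20
```

Taking the full table instead simply replaces the inbound filter with policy that rejects bogons and the ISP's own aggregate; the outbound announcement stays the same.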
Distant Transit Provider
- Positioning a router close to the transit provider's infrastructure is strongly encouraged:
  - Long haul circuits are expensive, so the router allows the ISP to implement appropriate filtering first
  - Moves the buffering problem away from the transit provider
  - Remote co-lo allows the ISP to choose another transit provider and migrate connections with minimum downtime

Distant Transit Provider
- Other points to consider:
  - Does require remote hands support
    - (Remote hands would plug or unplug cables, power cycle equipment, replace equipment, etc as instructed)
  - Appropriate support contract from equipment vendor(s)
  - Sensible to consider two routers and two long-haul links for redundancy

Distant Transit Provider
[Diagram: ISP1 with two border routers (BR1 and BR2) on independent long-haul circuits to the transit provider, from aggregation routers AR1 and AR2]
- Upgrade scenario:
  - Provision two routers
  - Two independent circuits
  - Consider a second transit provider and/or turning up at an IXP

Summary
- Design considerations for:
  - Private interconnects
    - Simple private peering
  - Public interconnects
    - Router co-lo at an IXP
  - Local transit provider
    - Simple upstream interconnect
  - Long distance transit provider
    - Router remote co-lo at a datacentre or the transit provider's premises

Addressing

Where to get IP addresses and AS numbers
- Your upstream ISP
- Africa
  - AfriNIC – http://www.afrinic.net
- Asia and the Pacific
  - APNIC – http://www.apnic.net
- Europe and the Middle East
  - RIPE NCC – http://www.ripe.net/info/ncc
- Latin America and the Caribbean
  - LACNIC – http://www.lacnic.net
- North America
  - ARIN – http://www.arin.net

Internet Registry Regions
Getting IP address space
- Take part of your upstream ISP's PA space, or
- Become a member of your Regional Internet Registry and get your own allocation
  - Requires a plan for a year ahead
  - General policies are outlined in RFC2050; more specific details are on the individual RIR websites
- There is no more IPv4 address space at IANA
  - Most RIRs are now in their "final /8" IPv4 delegation policies
  - Limited IPv4 available
  - IPv6 allocations are simple to get in most RIR regions

What about RFC1918 addressing?
- RFC1918 defines IP addresses reserved for private internets
  - Not to be used on Internet backbones
  - http://www.ietf.org/rfc/rfc1918.txt
- Commonly used within end-user networks
  - NAT used to translate from private internal to public external addressing
  - Allows the end-user network to migrate ISPs without a major internal renumbering exercise
- Most ISPs filter RFC1918 addressing at their network edge
  - http://www.cymru.com/Documents/bogonlist.html

What about RFC1918 addressing?
- List of well known problems with this approach for an SP backbone:
  - Breaks Path MTU Discovery
  - Potential conflicts with usage of private addressing inside customer networks
  - Security through obscurity does not provide security
  - Troubleshooting outside the local network becomes very hard
    - Router interface addresses are only locally visible
    - The Internet becomes invisible from the router
  - Troubleshooting of connectivity issues on an Internet scale becomes impossible
    - Traceroutes and pings provide no information
    - No distinction between "network invisible" and "network broken"
  - Increases operational complexity of the network infrastructure and routing configuration

Private versus Globally Routable IP Addressing
- Infrastructure security: not improved by using private addressing
  - Can still be attacked from inside, or from customers,
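Filtering RFC1918 source addresses at the network edge, as recommended above, can be sketched with an IOS-style access list. The interface name is illustrative:

```
! Drop packets sourced from RFC1918 space at the network border
ip access-list extended EDGE-IN
 deny   ip 10.0.0.0 0.255.255.255 any
 deny   ip 172.16.0.0 0.15.255.255 any
 deny   ip 192.168.0.0 0.0.255.255 any
 permit ip any any
!
interface Serial0/0
 description Link to upstream
 ip access-group EDGE-IN in
```

A fuller bogon filter would also cover the other unallocated and special-use ranges tracked by the Team Cymru list cited above.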
    or by reflection techniques from the outside
- Troubleshooting: made an order of magnitude harder
  - No Internet view from routers
  - Other ISPs cannot distinguish between down and broken
- Performance: PMTUD breakage
- Summary:
  - ALWAYS use globally routable IP addressing for ISP infrastructure

Addressing Plans – ISP Infrastructure
- Address block for router loopback interfaces
- Address block for infrastructure
  - Per PoP or whole backbone
  - Summarise between sites if it makes sense
  - Allocate according to genuine requirements, not historic classful boundaries
- Similar allocation policies should be used for IPv6 as well
  - ISPs just get a (relatively) substantially larger block, so assignments within the backbone are easier to make

Addressing Plans – Customer
- Customers are assigned address space according to need
- Should not be reserved or assigned on a per-PoP basis
  - The ISP's iBGP carries customer nets
  - Aggregation is not required and usually not desirable

Addressing Plans – ISP Infrastructure
- Phase One: 223.10.0.0/21
  [Diagram: customer assignments, plus one /24 for infrastructure and one /24 for loopbacks]
- Phase Two: allocation grows to 223.10.0.0/20
  [Diagram: original assignments unchanged; the new space up to 223.10.15.255 is used for new assignments]

Addressing Plans – Planning
- Registries will usually allocate the next block to be contiguous with the first allocation
  - Minimum allocation could be a /21
  - Very likely that a subsequent allocation will make this up to a /20
  - So plan accordingly

Addressing Plans (contd)
- Document the infrastructure allocation
  - Eases operation, debugging and management
  - Submit network objects to the RIR database
- Document customer allocations
  - Contained in iBGP
  - Eases operation, debugging and management

Routing Protocols

Routing Protocols
- IGP – Interior Gateway Protocol
  - Carries infrastructure addresses and point-to-point links
  - Examples are OSPF and ISIS
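Carving the loopback block out of the allocation, one /32 per router, might look like this in IOS-style configuration (the choice of 223.10.6.0/24 as the loopback block and the address itself are illustrative):

```
! One /32 per router from the dedicated loopback /24 – never summarised,
! because iBGP next-hop resolution depends on these host routes
interface Loopback0
 description Router ID and iBGP peering address
 ip address 223.10.6.1 255.255.255.255
```

Because every iBGP session and the router ID hang off this address, the loopback block is the one part of the plan that should never be renumbered casually.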
n  p  EGP – Exterior Gateway Protocol carries customer prefixes and Internet routes n  current EGP is BGP version 4 n  p  No connection between IGP and EGP 65 .. ISIS. point-to-point links n  examples are OSPF... Why Do We Need an IGP? p  ISP backbone scaling Hierarchy n  Modular infrastructure construction n  Limiting scope of failure n  Healing of infrastructure faults using dynamic routing with fast convergence n  66 . Why Do We Need an EGP? p  Scaling to large network Hierarchy n  Limit scope of failure n  p  Policy Control reachability to prefixes n  Merge separate organizations n  Connect multiple IGPs n  67 . Interior versus Exterior Routing Protocols p  Interior n  n  n  n  Automatic neighbour discovery Generally trust your IGP routers Prefixes go to all IGP routers Binds routers in one AS together p  Exterior n  n  n  n  Specifically configured peers Connecting with outside networks Set administrative boundaries Binds AS’s together 68 . Interior versus Exterior Routing Protocols p  Interior n  n  Carries ISP infrastructure addresses only ISPs aim to keep the IGP small for efficiency and scalability p  Exterior n  n  n  Carries customer prefixes Carries Internet prefixes EGPs are independent of ISP network topology 69 . Hierarchy of Routing Protocols Other ISPs BGP4 BGP4 and OSPF/ISIS BGP4 IXP Static/BGP4 Customers 70 . alongside IP n  71 .Routing Protocols: Choosing an IGP p  Review n  the “OSPF vs ISIS” presentation: OSPF and ISIS have very similar properties p  ISP usually chooses between OSPF and ISIS Choose which is appropriate for your operators’ experience n  In most vendor releases. both OSPF and ISIS have sufficient “nerd knobs” to tweak the IGP’s behaviour n  OSPF runs on IP n  ISIS runs on infrastructure. 
    and the network addresses of any LANs with an IGP running on them
  - Strongly recommended to use inter-router authentication
  - Use inter-area summarisation if possible

Routing Protocols: More IGP Recommendations
- To fine tune IGP table size further, consider:
  - Using "ip unnumbered" on customer point-to-point links – saves carrying that /30 in the IGP
    - (If the customer point-to-point /30 is required for monitoring purposes, put it in iBGP instead)
  - Use contiguous addresses for backbone WAN links in each area – then summarise into the backbone area
  - Don't summarise router loopback addresses – iBGP needs those (for next-hop)
  - Use iBGP to carry anything which does not contribute to the IGP routing process

Routing Protocols: iBGP Recommendations
- iBGP should carry everything which doesn't contribute to the IGP routing process
  - Internet routing table
  - Customer assigned addresses
  - Customer point-to-point links
  - Dial network pools, passive LANs, etc

Routing Protocols: More iBGP Recommendations
- Scalable iBGP features:
  - Use neighbour authentication
  - Use peer-groups to speed up the update process and for configuration efficiency
  - Use communities for ease of filtering
  - Use a route-reflector hierarchy
    - Route reflector pair per PoP (overlaid clusters)

Security

Security
- ISP infrastructure security
- ISP network security
- Security is not optional!
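The IGP fine-tuning above — "ip unnumbered" on customer links, inter-router authentication, per-area summarisation — can be sketched in IOS-style configuration. Interface names, the area number and the summary range are illustrative:

```
! Customer point-to-point link borrows the loopback address –
! no /30 for this link ever enters the IGP
interface Serial0/1
 description Customer leased line
 ip unnumbered Loopback0
!
router ospf 1
 ! MD5 authentication between routers in this PoP's area
 area 1 authentication message-digest
 ! summarise the PoP's contiguous WAN link block into the backbone area
 area 1 range 223.10.8.0 255.255.252.0
!
interface Serial0/2
 description Backbone link
 ip ospf message-digest-key 1 md5 S3cr3t
```

Note that no "area range" covers the loopback block: the /32s must stay visible everywhere so iBGP next-hop lookups keep working.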
- ISPs need to:
  - Protect themselves
  - Help protect their customers from the Internet
  - Protect the Internet from their customers
- The following slides are general recommendations
  - Do more research on security before deploying any network

ISP Infrastructure Security
- Router security
  - Usernames, passwords, TACACS+
  - Disable telnet on the vtys; only use SSH
  - vty filters should only allow NOC access – no external access
  - See IOS Essentials for the recommended practices for ISPs

ISP Network Security
- Denial of Service attacks
  - e.g. "smurfing" – see http://www.denialinfo.com
- Effective filtering
  - Network borders – see Cisco ISP Essentials
  - Static customer connections – unicast RPF on ALL of them
  - Network operations centre
  - ISP corporate network – behind a firewall

Ingress & Egress Route Filtering
Your customers should not be sending any IP packets out to the Internet with a source address other than the addresses you have allocated to them!

Out of Band Management

Out of Band Management
- Not optional!
- Allows access to network equipment in times of failure
- Ensures quality of service to customers
  - Minimises downtime
  - Minimises repair time
  - Eases diagnostics and debugging

Out of Band Management
- OoB example – access server:
  - Modem attached to allow NOC dial-in
  - Console ports of all network equipment connected to its serial ports
  - LAN and/or WAN link connects to the network core, or via a separate management link to the NOC
- Full remote control access under all circumstances

Out of Band Network
[Diagram: equipment racks with router, switch and ISP server consoles; out-of-band Ethernet to the NOC; optional out-of-band WAN link to other PoPs; modem giving PSTN access for out-of-band dial-in]
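The router-security and uRPF recommendations above can be sketched in IOS-style configuration. The NOC management block and interface names are illustrative:

```
! Allow vty access only from the NOC management block, SSH only
access-list 10 permit 223.10.6.0 0.0.0.255
access-list 10 deny   any log
line vty 0 4
 access-class 10 in
 transport input ssh
!
! Strict unicast RPF on a static customer interface – drops packets
! whose source address is not reachable back via this interface,
! which enforces the ingress filtering rule stated above
interface Serial0/1
 description Static customer connection
 ip verify unicast source reachable-via rx
```

Strict-mode uRPF is safe on single-homed static customers; multihomed customers need loose mode or explicit access lists instead.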
Out of Band Management p  OoB Example – Statistics gathering: Routers are NetFlow and syslog enabled n  Management data is congestion/failure sensitive n  Ensures management data integrity in case of failure n  p  Full remote information under all circumstances 85 . Test Laboratory 86 . Test Laboratory p  Designed n  to look like a typical PoP Operated like a typical PoP p  Used to trial new services or new software under realistic conditions p  Allows discovery and fixing of potential problems before they are introduced to the network 87 . Test Laboratory p  Some ISPs dedicate equipment to the lab p  Other ISPs “purchase ahead” so that today’s lab equipment becomes tomorrow’s PoP equipment p  Other ISPs use lab equipment for “hot spares” in the event of hardware failure 88 . software or services on the live network n  p  Every major ISP in the US and Europe has a test lab n  It’s a serious consideration 89 .Test Laboratory p  Can’t afford a test lab? Set aside one spare router and server to trial new services n  Never ever try out new hardware. Operational Considerations 90 . Operational Considerations Why design the world’s best network when you have not thought about what operational good practices should be implemented? 91 . no matter how trivial the modification may seem n  Establish maintenance periods which your customers are aware of p  e.g.Operational Considerations Maintenance p  Never work on the live network. Tuesday 4-7am. Thursday 4-7am p  Never n  Unless you want to work all weekend cleaning up p  Never n  do maintenance on a Friday do maintenance on a Monday Unless you want to work all weekend preparing 92 . Operational Considerations Support p  Differentiate between customer support and the Network Operations Centre Customer support fixes customer problems n  NOC deals with and fixes backbone and Internet related problems n  p  Network Engineering team is last resort They design the next generation network. 
    implement new services, improve the routing design, etc
  - They do not and should not be doing support!

Operational Considerations – NOC Communications
- The NOC should know the contact details for the equivalent NOCs at upstream providers and peers
- Or consider joining the INOC-DBA system
  - Voice over IP phone system using SIP
  - Runs over the Internet
  - See www.pch.net/inoc-dba for more information

ISP Network Design Summary

ISP Design Summary
- KEEP IT SIMPLE & STUPID! (KISS)
- Simple is elegant is scalable
- Use redundancy, security and technology to make life easier for yourself
- Above all, ensure quality of service for your customers

ISP Network Design
ISP Training Workshops