
VoIP Pilot Status


  • VoIP Pilot Status

  • Why VoIP; Why now?

    • Centrex: Need to find a replacement
    • Cost: Reduce phone bills
    • Communication: Provide better tools

  • Pilot – Technical Details

    • Platform: Microsoft’s Skype for Business
      – Lineage: Office Live Communications Server -> Office Communications Server -> Lync -> Skype for Business
      – Standard Dell servers running Windows Server 2012 R2 (physical) at primary and DR sites; VMs for test
      – Key protocols:
        • Signaling: SIP (example below)
        • Media: RTP, SRTP
        • Audio Codecs: SILK, OPUS, G722, Siren, G711, RED (and many more)
        • Video Codecs: H.264/SVC
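    As a rough illustration of the signaling layer (the addresses, tags, and hostnames below are made up, not pilot values), a minimal SIP INVITE that starts a call looks like:

      INVITE sip:bob@example.wisc.edu SIP/2.0
      Via: SIP/2.0/TLS client.example.wisc.edu:5061;branch=z9hG4bK74bf9
      Max-Forwards: 70
      From: Alice <sip:alice@example.wisc.edu>;tag=9fxced76sl
      To: Bob <sip:bob@example.wisc.edu>
      Call-ID: 3848276298220188511@client.example.wisc.edu
      CSeq: 1 INVITE
      Contact: <sip:alice@client.example.wisc.edu;transport=tls>
      Content-Type: application/sdp
      Content-Length: ...

    The SDP body (omitted here) is where the endpoints negotiate the RTP/SRTP transport and choose from the codec lists above.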

  • Pilot – Features and Users

    • Deployed Features
      – PSTN connectivity
      – Simple phone experience available (not truly POTS)
      – P2P and Conference: Audio/Video, Screen share, Whiteboarding, File Sharing, Polling, and Q&A (UC)
      – O365 integration and Hybrid
      – Voicemail (O365 and AVST)
      – Call Center (Help Desk)

    • Users: DoIT, School of Business, Laboratory Animal Research, College of Agricultural and Life Sciences, College of Engineering

  • Pilot – Key Findings

    • Many suspicions confirmed
      – Complexity
      – It’s an adjustment for users
      – Huge potential

  • Additional Technical Variables

    • Infrastructure
      – Networking
        • DNS, DHCP, Firewalls, QoS, SIP Trunking, hardware, session border controller (a generic QoS sketch follows this list)
      – Server Administration
        • OS patching, hardware, certificates, monitoring
      – Application Administration
        • User administration/policies, VoIP application patching, VoIP application configuration

    • Endpoints
      – More types of devices
        • Desk phones, desktops, tablets, laptops, smartphones, headsets
      – Different types of issues
        • Application updates/issues, OS updates/issues, drivers

    • E911
      – Mobility makes this very difficult to get to the room level

    • Analog Devices
      – Blue phones, elevator phones, fax
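    As a generic illustration of the QoS item above (not the campus configuration – class names, percentages, and the interface are assumptions), VoIP deployments commonly mark media DSCP EF and give it a priority queue. A minimal Cisco MQC sketch:

      class-map match-any VOICE-MEDIA
       match dscp ef                  ! RTP/SRTP audio marked Expedited Forwarding
      class-map match-any VOICE-SIGNALING
       match dscp cs3                 ! SIP signaling traffic
      policy-map UPLINK-OUT
       class VOICE-MEDIA
        priority percent 20           ! low-latency queue so voice isn't delayed behind data
       class VOICE-SIGNALING
        bandwidth percent 5
       class class-default
        fair-queue
      interface GigabitEthernet0/1
       service-policy output UPLINK-OUT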

  • It’s required adjustments

    • Mobility
      – Users can call from their work number at home, in coffee shops – and from many more devices

    • Delegation to users
      – Users have the ability to modify their own workflows
        • Voicemail, call forwarding, call delegation, conference calls/permissions

    • Expectations
      – Dial tone is simple, but the added features of UC add complexity
      – Call quality varies; users are accustomed to the same quality for all calls
      – Unified Communications is much more than POTS, meaning the definition of ‘working’ is more complicated

  • It shows huge potential

    • VoIP by itself brings additional mobility, flexibility, and quality
      – A phone can go anywhere
      – Users are able to find a device that suits them
      – Wideband codecs make all the difference

    • Unified Communications has fundamentally changed how many people communicate
      – Video sessions, chat, calls, and screen shares require the same effort
        • Choose the right mode of communication at any time
        • Easier communication means more of it
      – Presence is complicated – in a good way
        • Actually gives you a good indication of whether someone is available or not
        • Becomes a core part of people’s workflows
        • Helps avoid communications that shouldn’t happen

  • Common Scenario


  • Additional Observations

    • It came naturally to many people
      – The escalation just makes sense
      – Our platform helped (Outlook integration, Skype consumer)
      – Our pilot population helped (lots of DoIT folks)

    • Desk phone adoption low
      – Less than 10% of our pilot population are using desk phones
      – Many people returned them for headsets

    • Some people care about UC, some people just want a phone
      – Many users have built the UC features into their everyday workflows
        • Video interviews (video conference calls), teaching (screen sharing), user support (screen share + shared control), impromptu meetings (ad hoc meetings)
      – Some people just want the phone and don’t care
        • Desk phone with voicemail

    • New Mac client sorely needed

  • Status and Future

    • Our platform is being used by ~1,000 users
      – ~23,000 audio calls/month
      – ~5,000 IM sessions/month (~30,000 total IMs sent)

    • Currently evaluating additional solutions to determine the best fit overall
      – Demos: AT&T, Microsoft, Cisco

    • Stay tuned for updates…

  • UW Campus Networking

  • Current “XXI Century” Network

    • Built in 2004
    • Consists of:
      • 4 Border/Core Routers In Separate Buildings
      • 10 Routers In Supernodes and Nodes
      • Backup Power In Each Location – UPS With A 4-Hour Runtime
      • All Routers Are Set Up For Active/Standby Failover
      • A Pair Of Firewalls In Each Location For Active/Standby Failover
      • 10Gb/s Internet Connectivity with a backup path
      • 10Gb/s Inter-Backbone Connectivity With Backup 10Gb/s
      • 1Gb/s Building To Core Connectivity With Backup 1Gb/s
      • Building Networks Have Redundant Switches With Backup Links

  • Current XXI Century Network – Failure

    [Diagram: primary and backup border/core routers; when a primary fails and the backup takes over, 1/3 of campus is down during the failover]

  • New – “NextGen” Network Goals

    • Better Network Redundancy
    • Consolidate Equipment Where Possible
    • Backup Power - Longer Runtimes
    • Higher Speed Core
    • Better Utilize Backup Links
    • Faster Failovers
    • Ability To Connect Buildings At Speeds Greater Than 1Gb/s

  • NextGen Network

    • Consists of:
      • Additional Backbone Fiber
      • 2 Core/Border Routers In Separate Buildings (Consolidated from 4 to 2)
      • 3 Pairs Of Routers In 3 Buildings (Pairs Have Been Split Between 2 Buildings)
      • 4-Hour Backup Power In Each Location – DC Power Plant
      • Generator Backup In Each Location
      • All Routers Are Set Up For Active/Active Failover
      • A Pair Of Firewalls In Each Location For Active/Standby Failover
      • 100Gb/s Internet Connectivity With A 20Gb/s Backup Path
      • 10-100Gb/s Inter-Backbone Connectivity With Backup 10-100Gb/s
      • 1-10Gb/s Building To Core Connectivity With Backup 1-10Gb/s
      • Building Networks Have Redundant Switches With Active Redundant Links

  • NextGen Network – Failures

    [Diagram: primary/backup nodes and cores; node failures reconverge in milliseconds so 1/3 of campus stays up, while core failover takes 1-2 seconds]

  • What’s been migrated?

  • What’s left to migrate?

  • Datacenters

  • Current Datacenter Network

    • Built in 2004
    • Consists of:
      • 2 Node Routers In Separate Buildings
      • 2 Aggregation Switches In Separate Buildings
      • 21 Switches For Server Connectivity
      • Backup Power In Each Location
      • All Routers Are Set Up For Active/Standby Failover
      • 2 Pairs Of Firewalls In Each Location For Active/Standby Failover – One for Dev. and Prod.
      • 2 Pairs Of Load Balancers In Each Location For Active/Standby Failover – One for Dev. and Prod.
      • 10Gb/s Internet Connectivity with a backup path
      • 10Gb/s Connectivity With Backup 10Gb/s For Primary Switches
      • 1Gb/s Connectivity With Backup 1Gb/s for Secondary Switches

  • Current Datacenter Network – Failures

    [Diagram: a primary node failure forces the backup path – backup core, aggregation switch, load balancer, and access switches become active – and the network fails over with 20-40 seconds of reconvergence]

  • New - Distributed Datacenter Network(DDN) Goals

    • Better Network Redundancy
    • Higher Speed Core
    • Better Utilize Backup Links
    • Faster Failovers
    • Have The Ability To Add Existing Datacenters On Campus To It – Providing Layer2 Connectivity
    • Provide A Path To Centralize Datacenters On Campus
    • Provide 100Mb/s, 1Gb/s, 10Gb/s And Some 40Gb/s Server Interfaces In A Single Switch

  • Distributed Datacenter Network

    • Consists of:
      • 2 Node Routers In Separate Buildings
      • 2 Aggregation Switches In Each Datacenter
      • 45+ Switches For Server Connectivity
      • Routers Are Set Up For Active/Standby Failover
      • 2 Pairs Of Firewalls In Each Location For Active/Standby Failover – One For Dev. And Prod.
      • 2 Pairs Of Load Balancers In Each Location For Active/Standby Failover – One For Dev. And Prod.
      • 20Gb/s Internet/Backbone Connectivity With A Backup 20Gb/s Path
      • 2 x 40Gb/s Connectivity – Datacenter To DDN Core (Utilizes Both 40Gb/s Links While Providing Fast Failover)
      • Server Racks Have 100Mb/s, 1Gb/s, 10Gb/s And Some 40Gb/s Switch Interfaces For Server Connectivity

  • Distributed Datacenter Network – Failures

    [Diagram: spine/aggregation/access topology with core routers and load balancers; spine and aggregation failures reconverge in milliseconds, while other components fail over in 1-2 seconds or 0-40 seconds]

  • Current State of DDN

    • Core & Aggregation Have Been Installed
    • A Majority Of Server Switches Have Been Installed
    • Existing Firewalls Have Been Migrated
    • Load Balancers Have Not Been Migrated Yet (New Chassis)
    • Routing Needs To Move

    • Servers Migrated To DDN (as of 5/23/2016):
      • Computer Sciences = 305
      • Medical Foundation Office Building = 171
      • WARF = 176

    • Server Connections Still Needing To Be Moved (as of 5/23/2016):
      • Computer Sciences = 936

  • Load Balancing

  • Current Load Balancers

    • 6 x Citrix MPX17000s (2 = Dev/Test, 2 = Production, 2 = Techlab)
      • Active/Standby configuration, physically in two different buildings
      • Running in transparent mode (Layer2 Bridging)
      • 18Gb/s Max Bandwidth
      • SSL Offload
      • Compression
      • Session Reuse
      • Some Caching
      • Automatic Outage Pages
      • HTTP Rewrite/Responders

  • New Load Balancers

    • MPX17000 - EoS
    • 3 x Citrix SDX14020s (2 = Dev/Test/Production, 1 = Techlab)
      • Configuration - In Progress
      • Active/Standby configuration, physically in two different buildings
      • Running in transparent & route mode
      • 20Gb/s Max Bandwidth Today, Scalable up to 100Gb/s (License)
      • Up to 25 Virtual Load Balancers
      • SSL Offload
      • Compression
      • Session Reuse
      • Some Caching
      • Automatic Outage Pages
      • HTTP Rewrite/Responders (a generic example follows)
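    As a sketch of what an HTTP rewrite/responder policy does (generic NetScaler-style CLI; the names are ours and this is not the campus configuration), a common use is forcing HTTP traffic to HTTPS:

      add responder action http_to_https_act redirect '"https://" + HTTP.REQ.HOSTNAME + HTTP.REQ.URL'
      add responder policy http_to_https_pol HTTP.REQ.IS_VALID http_to_https_act
      bind lb vserver http_vip -policyName http_to_https_pol -priority 100 -type REQUEST

    The same mechanism, with a respondwithhtmlpage action in place of the redirect, is one way automatic outage pages can be served.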

  • Firewalls

  • Current Firewalls

    • UW-Madison firewalls today are Cisco ASA5585-60 Layer4 firewalls
      • Active/Standby configuration, physically in two different buildings
      • Running in transparent mode (Layer2 Bridging)
      • 20Gb/s Max Bandwidth
      • Layer4 Firewalling
      • 16 x ASA5585-60s
      • Supporting 441 Virtual Firewalls

  • Current Firewall Bandwidth

    • 3-Month Bandwidth Usage:
      • Computer Sciences:
        • Avg. In/Out = 2.2Gb/s, Peak = 4.9Gb/s
        • Avg. In/Out = 3.4Gb/s, Peak = 7.2Gb/s
      • 432 N. Murray:
        • Avg. In/Out = 3.3Gb/s, Peak = 7.0Gb/s
      • Animal Sciences:
        • Avg. In/Out = 2.2Gb/s, Peak = 4.9Gb/s
      • CCI Datacenter:
        • Avg. In/Out = 6.3Gb/s, Peak = 15.1Gb/s
        • Avg. In/Out = 3.0Gb/s, Peak = 7.5Gb/s

  • NextGen Firewalls

    • Campus Security is pursuing a Next Generation Firewall
    • Active/Standby Configuration, Placed In Two Different Buildings
    • Layer7 Firewall With Full Threat Prevention, VPN, SSL Decrypt
    • Potential For AD Integration
    • Multiple Mode Support (L1, L2, L3, TAP)
    • 225 Virtual Firewalls Supported Per Pair Of Chassis
    • 60Gb/s Layer7 Bandwidth Today, Scalable Up To 200Gb/s
    • 30Gb/s Threat Bandwidth Today, Scalable Up To 100Gb/s

  • NextGen Firewall Plans

    • Not enough funds to do all of campus today:
      • Computer Sciences Supernode = Will continue to run on Cisco Chassis
      • 432 N. Murray Supernode = Will continue to run on Cisco Chassis
      • Animal Sciences Supernode (includes Wireless) = Pair of NextGen Firewalls
      • CCI Datacenter = Pair of NextGen Firewalls

  • Wireless

  • NextGen Wireless Network

    • NextGen Wireless Network Core
      – 4 pairs of wireless controllers configured as active/standby pairs, located in different buildings
      – 2 layer-2 switches located in separate buildings interconnect all wireless core devices; these switches provide active/active connections for all core devices, including NextGen firewalls
      – Redundant connections to core campus routers
      – Backup power at both sites
      – Authentication is load balanced across a cluster of 9 servers
      – Redundant 10Gb/s to each core device
      – 4,106 active access points connected at either 100Mb/s or 1Gb/s
      – Peak usage: 54,900 concurrent devices and 5.4Gb/s

  • AANTS Changes

    - AANTS User Interface Changes
    - WiscNIC Database Changes
    - WiscNIC Group Functionality

  • AANTS Changes

    • AANTS User Interface Changes: Clarify use of room, jack and description fields.

    • WiscNIC Database: Cleaner authentication, Groups, easier to understand and support.

    • DB Cleanup: We've been cleaning up the database, and this builds on those efforts.

  • AANTS UI Changes: Why?

    • Public Safety: We need to deliver VoIP user location information to Enhanced 911 Public Safety Answering Points [PSAPs].

    • Cleaner UI: Current Field Descriptions are ambiguous and mostly have meaning to technical staff.

    • DB Cleanup: We've been cleaning up the database, and this builds on those efforts.

  • The Red portion was taken from the existing ShowSwitch.cgi. The Blue portion was edited to show various options, which we will explain.

  • Beneath the Port Description is bold Room data, also uploaded by Field Services. If you have user entered Patch/Jack data (in blue), AANTS will not print out either piece of the bold Field Services uploaded data for that port. This is why Gi1/0/2 has no bold data, and Gi1/0/4 does. We are pushing that room data into a new Room field (in green) and exposing it for user edits.

  • The Patch/Jack data comes from two sources. The bold data (in red) is uploaded by Field Services. The user entered data (in blue) comes from AANTS web form users. If you have user entered Patch/Jack data for a port, AANTS won't print the bold information for that port. We're coalescing these into a JackID field (under Patch/Jack and shown in green) and adding a history.

  • WiscNIC Changes

    • Auth Bits: Enable/Disable Access with a bit flip, simplifying account maintenance.

    • Remove Auth Info from Subnets: All Auth functions are by VLAN, not subnet, cleaning up the interface between IPAM and WiscNIC.

    • Group Management: Adding group management is the largest change.

  • Group Management

    • The User Group: Every user belongs to their own personal group, plus zero or more additional groups.

    • Auth by Groups: Admins can add or remove a group’s tech-c contacts and be done (a hypothetical sketch follows).
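    To make the model concrete, here is a hypothetical Python sketch of group-based auth with an enable/disable bit – the names and structure are ours for illustration, not the actual WiscNIC schema or code:

      # Hypothetical sketch only -- illustrative names, not the WiscNIC schema.
      from dataclasses import dataclass, field

      @dataclass
      class Group:
          name: str
          enabled: bool = True                         # the auth bit: flip to disable
          members: set = field(default_factory=set)    # tech-c contacts (usernames)
          vlans: set = field(default_factory=set)      # VLANs (not subnets) the group manages

      def authorized(user: str, vlan: int, groups: list) -> bool:
          """A user may manage a VLAN if any enabled group they belong to grants it."""
          return any(g.enabled and user in g.members and vlan in g.vlans for g in groups)

      # Every user gets a personal group, and may belong to more:
      alice_personal = Group("alice", members={"alice"}, vlans={301})
      doit_net = Group("doit-network", members={"alice", "bob"}, vlans={120, 121})

      print(authorized("alice", 120, [alice_personal, doit_net]))  # True
      doit_net.enabled = False     # account maintenance is a single bit flip
      print(authorized("alice", 120, [alice_personal, doit_net]))  # False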

  • Topics

    • DNS/DHCP/IPAM Project
    • MUFN
    • BOREAS-Net upgrade

  • DNS/DHCP/IPAM

  • Business Needs & Project Goals

    • DoIT seeks to replace and enhance our open-source implementations of DHCP and authoritative DNS software with a commercially supported software implementation in order to:
      • Streamline DNS, DHCP, and potentially IPAM application maintenance activities;
      • Augment DNS/DHCP infrastructure to allow new services to be developed (e.g., delegated DNS admin);
      • Obtain more reliable DNS/DHCP infrastructure in support of increasing life-safety (e.g., voice over IP) and other mission-critical network applications;
      • Implement DNSSEC automation to improve DNS security (a sketch follows this list);
      • Develop the capability to move future DNS advancements into production more quickly;
      • Acquire DNS start of authority (SOA) redundancy to support campus continuity-of-operations activities and communication systems (e.g., www.wisc.edu, alerts.wisc.edu).
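    For context on the DNSSEC automation goal: this is the kind of key management and re-signing that otherwise has to be configured and tended by hand. An illustrative BIND 9 zone stanza (zone name and paths are placeholders; commercial DDI appliances automate the equivalent):

      zone "example.wisc.edu" {
          type master;
          file "db.example.wisc.edu";
          key-directory "/etc/bind/keys";
          auto-dnssec maintain;   // sign with the keys in key-directory and track timed rollovers
          inline-signing yes;     // keep the unsigned master file, serve a signed copy
      };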

  • Vendor Comparison

    • Evaluated BlueCat and InfoBlox

    • InfoBlox excelled in:
      • Data import
      • Active/Standby SOA failover
      • Scheduled changes
      • Native UI
      • Troubleshooting tools

    • Both presented: API, DNS classless delegation [RFC 2317] (example below), DNS TSIG, DNSSEC, and DHCP protocol failover automation
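    For reference, RFC 2317 classless delegation lets reverse DNS for a block smaller than a /24 be delegated to its owner. The textbook pattern (addresses from the RFC's documentation range, not campus space):

      ; parent zone 2.0.192.in-addr.arpa delegates 192.0.2.0/26 and CNAMEs each address into it
      0/26       IN NS     ns.customer.example.
      1          IN CNAME  1.0/26.2.0.192.in-addr.arpa.
      2          IN CNAME  2.0/26.2.0.192.in-addr.arpa.
      ; ...one CNAME per address in the /26...

      ; in the delegated 0/26.2.0.192.in-addr.arpa zone, ordinary PTR records:
      1          IN PTR    host1.customer.example.

    One CNAME per address lets the parent stay authoritative for the /24 while the customer edits PTR records in their own zone.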

  • MUFN

  • MUFN (Metropolitan Unified Fiber Network)

    • City of Madison Monona Terrace convention center
      • Finalist to host the 1999 Vice-Presidential debate
      • Lost due to “lack of adequate broadband access”
    • Developed an “interested parties” roundtable to address affordable broadband access issues
    • Hosted the 2002 National Mayors Conference with fiber

  • (significant) MUFN milestones

    • 2000 – 2006: roundtable / identified mutual-interest projects
    • 2007: formally developed “pin map”
    • 2008: network engineering / BTOP grant submission
    • 2009: won BTOP grant
    • 2010 – 2012: built network / bootstrapped operations
    • 2013: formed MUFN Consortium, an unincorporated association
    • 2014: received 501(c)(4) designation from IRS

  • MUFN 2016

    (Presenter notes: approximately 150 miles of conduit and fiber; 285 connections)

  • MUFN Active Optical Infrastructure

    [Ring map: 12 nodes – UWHC Data Center, Fire Station 11, East PD, 432 N Murray, 1210 W Dayton, Olin, CCB, MATC Truax, Fire Station 10, Metro Innovation, 222 W Wash, and Police Training – linked by single-mode fiber spans of roughly 3 to 30 km]

    (Presenter notes: $1.5MM system, 12 nodes; ~10 of 40 waves (light colors) in service on “figure 8” rings, each color is 10Gbps; provides several institutions with redundant connectivity, and some with both their primary and redundant service)

  • Current MUFN members (22)

    • City of Madison
    • City of Middleton
    • City of Monona
    • Dane County
    • DaneNet
    • Department of Public Instruction
    • Madison College
    • Madison Metropolitan School District
    • Middleton-Cross Plains Area School District
    • Monona Grove School District
    • South Central Library System
    • State Lab of Hygiene
    • SupraNet
    • UnityPoint-Meriter (hospital)
    • UW-Extension Geological and Natural History Survey
    • UW-Hospital and Clinics
    • UW-Madison DoIT
    • UW-Madison School of Medicine and Public Health
    • UW Medical Foundation
    • WiscNet
    • WI Dept of Military Affairs
    • WI Independent Network (WIN)

  • MUFN “transport” services

    • Operations
      – 24x7 Network Mgmt Center (NMC), engineering, IT services, fiber database, cable locates

    • Governance
      – Oversight Committee (Board)
      – Technical team (meets bi-weekly)

    • Fiscal agent
      – A/P, A/R, hold funds, prepare financial statements, independent audits (IRS requirement; >$400k concern)

    • Legal services – DeWitt Ross & Stevens

    • Insurance
      – Liability, property, directors & officers

  • Current/future MUFN activities

    • Organic network/member growth
    • Cellular improvements – Verizon cellular fiber backhaul for microcells
    • Business & multi-tenant building broadband
    • Low-income broadband program
    • Exploring FTTH in City of Madison
    • First responder radio system across metro area
    • Centralized traffic controller system
    • Cameras to deter/catch criminal activity
    • WiRover, ParaDrop, IoT research

  • BOREAS-Net Upgrade

  • MUFN / Off Campus Connectivity Rates
