Alright, time for some updates.
We have installation teams driving around all of Sweden and most of the core network is starting to be built out. We have had a few good weeks at a good pace, so at this very moment we have installed 82% of the DWDM nodes and 64% of the core routers. At the same time as we roll out DWDM and routers to core locations, we also install and commission the 4G out-of-band console servers so we can reach and start to configure all equipment.
Both ROADMs and routers are shipped without any configuration at all, and we do not require the field technicians who install the equipment to do anything other than connect the console server to the 4G antennas (they are preconfigured, as mentioned in an earlier post); after that we should be able to access all equipment at the sites. This means we can commission links, wavelengths and optical spans before we have light on the fibre.
There is still work going on throughout the network to fix and repair all reported problems with the dark fibre we have encountered. To avoid pushing the schedule further, we install all equipment and connect the fibre regardless of whether the fibre span is okay or not; hopefully, once the fibre is up to standard, we can go ahead and light the spans immediately.
We have been running a few successful tests on the completed optical spans, and so far we have not encountered any problems lighting up long (or short) links, as long as the amplifiers are performing as expected and the fibre is decent. The longest working link we have today with the Juniper 100G coherent interfaces is Uppsala to Luleå, which is about 900 km of fibre. Hopefully we can get enough links up and running so that we can test Malmö–Kiruna (2000 km of fibre) in the not so distant future.
Getting the maximum amount of performance out of a Raman amplifier is a project that will continue well into 2017, and maybe into 2018 as well. The main reason we use these high-grade hybrid Raman amplifiers is not to be able to light up the current network as it is, but to prepare for the future and make sure we don't need to truck-roll and change all amplifiers when and if we upgrade to the next technology, which may be 400G, or even 1000G.
During May and throughout the summer, our fibre sub-contractor will start to measure and prepare all access fibres: the fibre going from the core/POP site out to the customer in the other end. The requirements for these fibres are not as harsh as for the long-distance links, where we more or less expect them to be perfect; for access fibre, perfect will not exist. There are local broadband and power companies in almost all cities we are active in, and the quality of installations and what you can do varies a lot. Most of them are also in a state of monopoly and do not really need to deliver anything out of the ordinary. What we require is that one of the access fibres be within specifications such that a regular 100G-LR4 can work on it; the other access fibre can be worse, since we will run the coherent optics on top of that and have much more headroom for high attenuation and reflection.
Alright, enough of that. Let’s talk some logical design.
This is a very simple overview of the physical design of university access. There are two Juniper MX480s located at diverse locations on campus, tied together with Nx10G fibre in between. The uplinks are one 100G LR4 locally in the city and one 100G coherent to the neighbouring city. The core router in the city is in turn tied to two neighbouring cities using 100G coherent interfaces.
This is more of a block diagram of how things will be connected if it is a standard type of handoff. Click it for the full resolution to see how the idea is to connect things. The 8PSM ”filter” is a passive splitter/combiner and is gridless.
Time to look at logical things…
So this is the simplest way of doing a handoff to the university: a classical ”ISP handoff” with EBGP. Both Juniper MX480s are SUNET-controlled and run as part of the core network. ISIS, MPLS and IBGP are terminated in the MX480s, and EBGP is used down to the customer's AS number; we can preferably send a full table down to the university so it can take advantage of all diverse paths. Day 1 we have allocated 6x10G interfaces for downstream purposes, which are free for the university to use as they like, either towards different equipment or to build an Nx10G LAG. SUNET also has the possibility to deliver any type of service through auxiliary interfaces, such as MPLS tunnels, interconnect interfaces, and other types of customers that connect through campus (dormitories, museums, science projects, etc.), shown in the diagram as ”other stuff”.
This requires the university to have a device that can speak BGP accordingly, and most likely one that can also do LAG/LACP compatible with the Juniper devices to aggregate the links.
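To make the handoff concrete, here is a minimal Junos-style sketch of what the SUNET side could look like. This is an illustration only: the interface names, AS numbers, addresses and policy name are all invented for the example and are not SUNET's actual values.

```
/* Sketch: bundle two of the 10G downstream interfaces into a LAG
   and run EBGP over it. All names and numbers are hypothetical. */
interfaces {
    xe-0/0/0 {
        gigether-options {
            802.3ad ae0;                 /* member of the LAG */
        }
    }
    xe-0/0/1 {
        gigether-options {
            802.3ad ae0;
        }
    }
    ae0 {
        aggregated-ether-options {
            lacp active;                 /* LACP towards the campus device */
        }
        unit 0 {
            family inet {
                address 192.0.2.1/30;    /* example point-to-point subnet */
            }
        }
    }
}
protocols {
    bgp {
        group university {
            type external;
            peer-as 64512;               /* example: the university's ASN */
            neighbor 192.0.2.2;
            export send-full-table;      /* policy advertising the full table */
        }
    }
}
```

The campus side mirrors this: an LACP-capable LAG and an EBGP session announcing the university's prefixes, which is exactly why the customer equipment needs to be reasonably capable.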
The positive thing with this solution is that it is very simple, for both the customer and the operator: minimal configuration and clear boundaries. The negative thing is that it requires the university to run and take care of BGP themselves in their own equipment, which puts quite high demands on the customer's equipment.
This is where things get interesting: what if SUNET could co-exist with the campus network in the same equipment, seeing as the investment in the Juniper router is already done?
What we have done for many years in SUNET, and will continue with in the new network, is to run logical systems on all of our routers. This cuts the physical router into logical systems (virtualization), and you assign resources to each logical system as you like. In the old network we provided a logical system (guest) to the university and kept SUNET in the main instance (the host); this was mostly to be able to provide a BGP-to-OSPF translator, seeing as no university was interested back then in housing their own BGP-capable devices.
What we will do in the new network is to turn things around: put SUNET in a logical system, and the campus in the main instance. This is because there are a few features that are not supported in a logical system, such as collecting netflow and using MC-LAG. SUNET can and will collect netflow in the core routers instead, and the university can then extract netflow from their own MX routers. The main instance and the SUNET logical system will use different VLANs on the 100G ports up to the core. The university will peer with EBGP with the core routers and not with the LSYS on the same box; this is mainly because the logical-tunnel interfaces used to connect logical systems are limited to half the ASIC speed. The LSYS housing the SUNET instance will terminate MPLS, IBGP and ISIS (which is the IGP of choice for SUNET). This logical system will be used to manage the box itself, but also to terminate non-university customers at the campus sites and to provide virtual services to those needing them.
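As a rough illustration of the VLAN split described above, a Junos-style sketch could look like the following. All names, VLAN IDs, interfaces and addresses here are invented for the example; the real design will differ.

```
/* Sketch: campus in the main instance, SUNET in a logical system,
   sharing the tagged 100G uplink on different VLANs. Hypothetical values. */
interfaces {
    et-0/0/0 {
        vlan-tagging;
        unit 200 {                        /* campus VLAN towards the core */
            vlan-id 200;
            family inet {
                address 198.51.100.1/31;
            }
        }
    }
}
logical-systems {
    sunet {
        interfaces {
            et-0/0/0 {
                unit 100 {                /* SUNET VLAN towards the core */
                    vlan-id 100;
                    family inet {
                        address 198.51.100.3/31;
                    }
                    family iso;           /* ISIS, the SUNET IGP */
                    family mpls;
                }
            }
        }
        protocols {
            isis {
                interface et-0/0/0.100;
            }
            mpls {
                interface et-0/0/0.100;
            }
        }
    }
}
```

Note that both the main instance and the LSYS get their own units on the same physical 100G port, and that the campus EBGP session runs over the campus VLAN straight to the core router rather than through logical-tunnel interfaces, for the throughput reason mentioned above.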
The positive thing with this type of setup is that we leverage the very potent routing and protocol functionality of the MX on both the ISP side and the campus side. There are plenty of resources for everyone to do what is needed, and with the broad feature set of a service-router class of equipment, the university can enjoy technologies that are probably not available in a more classic campus-core type of device. Those who decide to use the SUNET MX router as their L3 core instantly get the benefit of fewer devices in the network and no need to re-invest in core equipment for the foreseeable future.
Logical systems will of course be provided free of charge, since they do not cost SUNET anything to produce (no licensing fees or such).
Juniper has recently changed vital infrastructure in how the RE is built; here are some interesting tidbits from when we have (tried to) upgrade releases and get things to work.