White Paper: CWDM + DWDM = Increased Capacity
One way of increasing capacity in fiber optic links is to add DWDM over existing CWDM
April 2023
by Robert Isaac
Ghostwritten by Scott Mortenson
For years, service providers have been using Coarse Wavelength Division Multiplexing (CWDM) to increase the capacity of fiber optic links. CWDM filters offer up to 18 ITU-defined (International Telecommunication Union) wavelengths, and CWDM has been an ideal way to transport 1Gbps and 10Gbps circuits over a single fiber span.
What we are seeing now is an uphill climb for CWDM applications: bandwidth requirements keep growing, while support for CWDM from some equipment manufacturers is shrinking.
With CWDM support from manufacturers dwindling and the need for capacity increasing at an exponential rate, the question becomes “How do we increase the capacity without forklifting the existing CWDM?”
One answer is to run DWDM over the existing CWDM.

Figure 1
The Concept
Because CWDM is built with channels that are spaced 20nm apart and often have a 10-13nm passband per wavelength (see Figure 1 above), DWDM makes a lot of sense. DWDM filters are built with much smaller channel spacing (0.4nm/0.8nm/1.6nm), so these wavelengths can be combined and will pass through the ~13nm passband of a CWDM channel. For this example, we will focus on standard DWDM filter channels in the C-Band (1525nm-1565nm) spectrum with 100GHz spacing, as this is the most common and best-supported DWDM application.
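A quick back-of-the-envelope check (my arithmetic, not a figure from the paper): near 1550nm, a 100GHz channel spacing corresponds to a wavelength spacing of

Δλ ≈ λ² × Δf / c = (1550nm)² × 100GHz / c ≈ 0.8nm

so a single ~13nm CWDM passband has room for roughly 13 / 0.8 ≈ 16 channels on the 100GHz grid (the exact count per CWDM port depends on the real filter shape, as Figures 3 and 4 show), and about twice as many at 50GHz.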
If warranted, the same principle can be applied using DWDM channels in the L-Band (1570nm-1610nm), or channels spaced at only 50GHz to increase channel count and density, and these can be easily supported with tunable SFP+ optics.

Figure 2
Figure 2 shows how cascading DWDM filters over an existing CWDM span would connect. In this example we use a standard, off-the-shelf DWDM filter equipped with 8 channels (ITU Ch 52-59).

Figure 3

Figure 4
Figures 3 and 4 show how 20 DWDM channels could be added across the 1530nm CWDM port and 30 DWDM channels across the 1550nm CWDM port using C-Band channels. We could apply the same philosophy to the 1570nm, 1590nm, and 1610nm ports as well, but that would require L-Band DWDM channels, which aren't widely supported today.
The Challenge
Now that we know a standard 8-channel CWDM can be expanded to include another 50 channels, you may be thinking, “What are the potential downsides to using DWDM over CWDM?” That would be a very good question to ask.
This concept has been available for many years and hasn’t become part of the mainstream deployment strategy for many network operators. Why not? The only limitation to using this concept from a performance standpoint is the added insertion loss of having both the CWDM and DWDM filters between the transceivers.

Figure 5
Figure 5 shows the logical end-to-end path for 8 channels of DWDM over an existing CWDM connecting two sites 30km apart. To keep losses lower, we will limit the new channels being added to 8 DWDM channels. Knowing that 10G DWDM optics have an overall power budget of 23dB, we can see that adding the DWDM filters brings the overall link loss to 21.5dB, which falls just inside the power budget.
Because DWDM optics are built for longer reaches with higher power budgets — and CWDM is often used on shorter fiber spans, say under 30km — the insertion loss should be a non-issue. And if the loss is an issue, DWDM channels can be amplified (unlike CWDM), so placing a low-cost EDFA between the CWDM and DWDM filters could help extend the reach well beyond even 30km.
Reluctance toward this concept also seems to come from not fully understanding the simplicity of passive WDM, or how to manage the engineering, installation, records, and inventory of having both technologies within the same span. If those challenges can be overcome, overlaying DWDM onto your existing CWDM can be a very efficient and cost-effective way to respond to the exponential need for bandwidth we are facing in today's technology.
For network operators and service providers who have made a significant investment in CWDM and are facing the need for bandwidth growth, this concept should be considered. Passive DWDM filters can be deployed quickly without impacting existing traffic, are a very low-cost alternative to complex active systems, and can equip your network for the future in very short order. Add the operational efficiency of 10G tunable DWDM optics, and this could be a home run for your network.
Demystifying DWDM for the DCI
If it is so easy and inexpensive, why aren’t all the data centers defaulting to using this on every fiber end? Well, that’s where things get a little tricky.
Whenever you say “DWDM” to a Data Networking person (and even some Service Provider engineers), their default reaction tends to go straight to large, complex, and expensive DWDM systems, like Reconfigurable Optical Add Drop Multiplexing (ROADM) arrangements that are completely automated and perform optical switching, sub-signal aggregation, and even some L2 functions.
The truth is, DWDM is simply the combination and separation of circuits by wavelength — and only a small part of those larger systems. It is the basic technology that allows users to put 40+ distinct circuits on a given fiber, then separate them at the far end to connect to the individual switch ports.
As stated previously, this is often done passively, requiring no electrical power, software, annual maintenance agreement, etc. — and at a fraction of the cost of those more complex active systems.
So again, I ask: “Why aren’t more data center interconnects using this technology?”
Well, DWDM system design — or transport engineering — is usually not taught in Data Networking education courses. DWDM and transport are often thought of as competing ways of architecting a network, which means there are usually two camps: you are either a Data Network Engineer, or a Transport Engineer. Either way, one typically needs the other at some point in their network.
This is not to say you don’t need complex, software-controlled transport devices in your network. The truth is you likely do. What we are singling out here are a few applications where you can get what you need: Fiber capacity between two places quickly, inexpensively, and without sending anyone to school to get certified.
These applications can be:
• Point-to-Point Data Center Interconnects (DCI) on leased or owned fiber.
• Connections between campus facilities.
• Network facilities between rooms or floors.
Using Passive DWDM can:
• Reduce or eliminate leased or new fiber builds.
• Maximize the per-fiber data rate of installed fiber plants.
• Drastically reduce Capex cost of high-capacity switches, complex DWDM systems, and reliance on service providers to maintain the connections.
• Increase capacity of DCI connections in days, not months.
How can we do this in a way we can understand?
It really comes down to Optical Link Engineering.
If you take the physical map of your network and zoom in on one span where there is a capacity bottleneck, it becomes a lot easier. For simplicity’s sake, we will focus on connecting 10G switch ports, across a single span between 2km and 50km long, making the math fairly simple.
For these locations, we just need to focus on two primary factors: Link Budget vs Link Loss, and Dispersion.
Link Budget vs. Link Loss
Every optic or transceiver has a minimum transmit power and a minimum receiver sensitivity. By subtracting these two values, you are left with the link budget — the total amount of power loss the signal can experience and still be legible to the receiver.
In a standard connection, you would calculate (or measure) the total loss of the fiber, patch panels, cassettes, and splices between the two optics. And if that is less than the link budget, then it should work . . . right?
Passive DWDM only adds a little more math to the link engineering. The optics at each end need to be specific DWDM optics, and the filters add more insertion loss at each end — but it is still pretty much the same math.
For 10G DWDM optics, the link budget is typically in the 23dB range. If a fiber span, with DWDM filters, has less than 23dB of loss, the link should work. It's simple math.
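To make that math concrete, here is a minimal sketch of the check in Python. The individual component losses are illustrative assumptions chosen to reproduce the 30km example from Figure 5, not measured values; use your own datasheet and OTDR numbers.

```python
# Link budget vs. link loss -- a sketch with assumed (not measured) losses.
FIBER_LOSS_DB_PER_KM = 0.25   # assumed fiber loss; measure your own plant
CWDM_FILTER_LOSS_DB = 2.0     # assumed CWDM mux/demux insertion loss, per end
DWDM_FILTER_LOSS_DB = 3.5     # assumed DWDM filter insertion loss, per end
CONNECTORS_SPLICES_DB = 3.0   # assumed total for patch panels and splices

def link_loss_db(span_km: float) -> float:
    """Total end-to-end loss with CWDM and DWDM filters at both ends."""
    return (span_km * FIBER_LOSS_DB_PER_KM
            + 2 * CWDM_FILTER_LOSS_DB
            + 2 * DWDM_FILTER_LOSS_DB
            + CONNECTORS_SPLICES_DB)

POWER_BUDGET_DB = 23.0        # typical 10G DWDM optic, per the text

loss = link_loss_db(30)       # the 30km example from Figure 5
print(f"Link loss: {loss:.1f} dB, margin: {POWER_BUDGET_DB - loss:.1f} dB")
# -> Link loss: 21.5 dB, margin: 1.5 dB
```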
Or is it?
Dispersion
Another important factor to account for is Chromatic Dispersion (CD). This is a characteristic of single-mode fiber: as a signal travels along a fiber route, its wavelength components travel at slightly different speeds, so the pulse spreads out, with parts of it arriving slightly ahead of or behind schedule, making it difficult for the receiver to decipher.
The optics we are using also establish how much dispersion they can tolerate before the signal becomes undetectable. Fiber dispersion is typically specified in picoseconds per nanometer per kilometer (ps/nm/km), and an optic's tolerance as a total figure in ps/nm, or even simply by the optic's distance rating. For instance, a DWDM optic rated for 80km is often limited to 1360 ps/nm of accumulated dispersion, calculated from traveling 80km on SMF-28 type fiber with a CD coefficient of approximately 17 ps/nm/km.
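The dispersion check is the same kind of arithmetic. A minimal sketch, assuming the SMF-28-type coefficient quoted above:

```python
# Accumulated chromatic dispersion vs. the optic's tolerance -- illustrative only.
CD_PS_PER_NM_KM = 17.0               # typical G.652 / SMF-28 value near 1550nm

def accumulated_dispersion_ps_nm(span_km: float) -> float:
    """Total dispersion (ps/nm) the receiver must tolerate over the span."""
    return span_km * CD_PS_PER_NM_KM

TOLERANCE_PS_NM = 1360.0             # 80km-rated DWDM optic, per the text

span_km = 30
cd = accumulated_dispersion_ps_nm(span_km)
print(f"{span_km}km span: {cd:.0f} ps/nm vs {TOLERANCE_PS_NM:.0f} ps/nm tolerated")
# -> 30km span: 510 ps/nm vs 1360 ps/nm tolerated
```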
So, there you have it. If your link falls inside the specifications defined by the optics on each end, you can deploy passive DWDM to maximize the capacity of your fiber plant, and save loads of time and money.
But what if the span exceeds the link budget or dispersion rating? No problem! The addition of Erbium Doped Fiber Amplifiers (EDFA) to boost the signal power, and/or passive Dispersion Compensation Modules (DCM) to account for excess dispersion between the DWDM filters, can help extend the reach and ensure the optics on each end perform to expectation for years to come.
Often when Transport Engineers speak to Data Network Engineers, it can seem like they are speaking different languages. That is to be expected. Specialized jargon or terminology, approaches to problems, and education can be vastly different.
If what your network truly needs is fiber capacity, lower cost of fiber infrastructure, and flexibility of lightning-fast circuit turn-up, passive and even amplified DWDM networks could be the perfect solution.
The 40-channel, two-fiber DWDM solution using 10G SFP+ optics is a great way to get 400G of capacity for links up to about 60km without the need for amplification or dispersion compensation. But what if you want higher data rates on the link? This is where things get a little tricky.
If we remove coherent optics from consideration due to the expense and complexity of deploying them, we see a pattern emerge. Here is a quick snapshot of the specifications of DWDM optics (non-coherent) we could consider:

Figure 6
That table attempts to remove a bunch of “noise” or complexity in determining if a simple point-to-point two-fiber solution will work. What we see when we review those specs is that as data rate increases, the unmodified reach and power budget both decrease.

Figure 7
Given these values, we can use the optical reach and power budget — along with the logical diagram shown in Figure 7 — to determine how long of a cross-connect we can achieve.
If we assume the 40-channel filters have a high-performance loss of 3dB each, the patch panels have a loss of 0.5dB each, and the fiber loss is 0.25dB per km (ITU-T G.657.A1 and G.652.D or better), then we can work backwards to see what the total span distance is per optic/data rate.
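Here is a sketch of that working-backwards step, using only the loss assumptions just stated. The 23dB budget is the 10G figure from earlier; budgets for the higher rates should come from Figure 6 or the optic datasheets.

```python
# Budget-limited span length given the fixed losses assumed above.
FILTER_LOSS_DB = 3.0         # per 40-channel filter, one at each end
PATCH_PANEL_LOSS_DB = 0.5    # per patch panel, one at each end
FIBER_DB_PER_KM = 0.25

def max_span_km(power_budget_db: float, margin_db: float = 0.0) -> float:
    """Longest span whose total loss still fits inside the power budget."""
    fixed_losses = 2 * FILTER_LOSS_DB + 2 * PATCH_PANEL_LOSS_DB + margin_db
    return (power_budget_db - fixed_losses) / FIBER_DB_PER_KM

print(f"{max_span_km(23.0):.0f} km")   # 23dB 10G budget -> 64 km, budget-limited
# The deployable distance is the lesser of this figure and the optic's
# unmodified reach, which (per Figure 8) is the binding limit above 10G.
```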

Figure 8
Reviewing the numbers in Figure 8, we can see that once you go beyond the 10G data rate, the unmodified reach becomes the limiting factor. For this illustration, we can expect to be able to establish some versatile, yet high-capacity, cross-connects.
We had already reviewed the capacity of a passive 40-channel system using 10Gbps optics and know that it can support 400Gbps worth of capacity. Using the same methodology, we can create links with a total line capacity of 1Tbps @ 25Gbps per channel up to ~15km, 1.6Tbps @ 40Gbps per channel up to ~8.8km, and 4Tbps total capacity up to ~1.5km in fiber length. Knowing this can help reduce the total number of cross-connects needed between any two points.
Also worth noting: not all channels need to be the same data rate. If the link distance is designed to work with 100Gbps links (approximately 1.7km), that same link will be able to support 10G, 25G, and 40G channels as well.
Summary
Earlier we mentioned we were “removing a lot of noise” — and then continued to make a great deal of assumptions to come up with these numbers. For instance, Forward Error Correction (FEC) is required, and must be available on the host device, for links running at 25Gbps and higher. We made assumptions about fiber type, used calculated losses for the fiber spans, and assumed the necessary SFP+, SFP28, QSFP+, and QSFP28 ports were available at each end.
What this proved is that by combining passive filters and DWDM optics, we can increase the capacity as much as 40x per cross-connect pair. All this needs no power (except the switches), can be turned up very quickly, requires only 1RU of rack space (not counting the switches or patch panels), and adds zero latency.
As should be clear by now, this is not meant to be taken as gospel, and every effort should be made to know the optic specifications you are considering, the fiber type of the cross-connect, and have measured fiber loss and dispersion values before deploying.
When planned correctly, your CWDM plus DWDM can mean increased capacity without a big financial outlay. And your network can perform better as well.
Transoceanic Fiber Optics: The Cable That Runs the Tech World
by Scott Mortenson

A few years ago, an underwater cable broke, plunging a nation into Internet darkness. While some may say not having Facebook for a few hours is a positive thing, this was more than that; it was called “an absolute disaster . . . a national crisis.” It cut off not just social media, but email, cell phone service, financial transactions, business, and government communications.
It's still not clear what caused the breakage, but it was fixed about two weeks later.
And then, in 2022, it happened again. Only this time, it was worse.
You may recall the news of the Tonga volcano eruption and tsunami that left at least 6 dead, 19 injured, and others reported missing, with $90.4 million in damages. Besides enduring a devastating natural disaster, the people of Tonga lost their main connection to the outside world.
We can certainly understand why a volcanic eruption hundreds of times more powerful than the atomic bomb would take out a transpacific fiber optic cable, but it also underscores the fragility of our digital lives.
A Brief History
In a way, the whole idea of laying a cable on the ocean floor almost sounds like science fiction. How does a cable thousands of miles long even get made? How do they get it on the boat? How do they lay it on the ocean floor?
Cable breakages on land are bad enough, with everything from construction equipment, fires, even vandalism disrupting the fiber optics. But land “interruptions” are relatively easy to fix. How do they get repaired 25,000 feet (about 7.62 kilometers) underwater?
Communication cables laid on the ocean floor were not created when the Internet came along. The first “submarine cable” was dropped into the water in 1858, but only lasted about three weeks. It was attempted again in 1866, and worked, sending telegraph signals across the Atlantic. Telephone communications (e.g., AT&T) used underwater cables beginning in 1956, and the Internet started utilizing them in 1988.
Today, some estimates say there are 550,000 miles (about 885,000 kilometers) of transoceanic fiber optic cables, while others claim 750,000. That's enough to string a line to the moon and back, with room to spare. Whatever the actual number, it is a lot, which means “The Cloud” is not really in the sky, but as much as 5 miles underwater.
The cable itself is just a little bit more complex than its land-locked cousin. A “standard” fiber optic cable has a core, a cladding layer, and is usually coated with acrylate polymer or polyamide. There could be more layers of protection, depending on the application. Inside all of that is the optical fiber which carries the light. A submarine cable resting on the bottom of the ocean has more layers to protect it than a land-based one, like a scuba diver wears more than just a swimming suit.
A cross section of the shore-end of a modern submarine communications cable.
1) Polyethylene, 2) Mylar tape, 3) stranded steel wires, 4) aluminum water barrier, 5) polycarbonate, 6) copper or aluminum tube, 7) petroleum jelly, 8) optical fibers
Drawing by Oona Räisänen
Despite all this protection, an underwater cable can be damaged not just by volcanoes and tsunamis, but earthquakes, storm currents, ship anchors, and fishing trawlers. Even sharks looking for a snack.

Politics get involved as well, with both transoceanic cables and pipelines being sabotaged or blown up over governmental agendas and spats.
How The Cables Are Made
As for the manufacturing process of undersea cables . . . yes, it is a big process. The fibers are fed into a high-speed mill the size of a jet engine and encased in copper, with plastic, steel, aluminum, and even tar added to protect the cable from the elements.
You may think the end-product is fat and big, but it’s about as big as a larger-size garden hose. And the cable is designed to last 25 years if not subjected to volcanoes, earthquakes, or fishing trawlers.
Planning where the cable will be laid in the ocean takes at least a year, so its course can be plotted miles underwater, where the seabed has been charted but remains a little unpredictable. After all, there are both trenches and mountains down there.
Implementing The Fiber Optic Cable
Alcatel Submarine Networks (a part of Nokia) has more than 700,000 kilometers (435,000 miles) of optical submarine systems deployed worldwide (enough to circumnavigate the globe 16 times). One of the ships used to deploy the cable is about 450 feet long and can have as many as 80 crew members, with teams working in two 12-hour shifts. Although they have plenty of food for their long journey (two months or more), alcohol is not allowed. That’s for the better since the seas can be rough, sometimes so bad the captain may order operations to stop, cut the cable and find safe waters. Before cutting the cable, they tie a buoy to it so they can find it when they return.
Of course, the ship does not carry enough cable to span the whole ocean, so the work is done in segments. The ship can carry up to 4,000 miles (about 6,400 km) of cable, and it can take a month for the gigantic tanks where the cable is stored to be fully loaded.
A special subsea plow is used to trench and bury submarine cables along the seabed closer to shorelines, where things like anchoring and fishing are most prevalent and could damage the cables.
Out at sea, the cable is unspooled into the water, attached to several buoys temporarily so it doesn’t sink to the bottom too soon. Once out far enough, the buoys are removed and the cable sinks to the ocean floor. This is repeated over and over.
Some of you may be thinking, “Wait a minute! The light in fiber optics can only go so far before it loses strength — how can a cable traveling thousands and thousands of miles keep the signal alive?”
The answer: repeaters, placed every 60-70 kilometers (about 40 miles).
The Technology of Submarine Cable
Power is fed to the repeaters from power feeding equipment located in the submarine cable station. In traditional submarine cables, every fiber pair has its own amplifiers, and four fiber pairs share a repeater housing a four-amplifier chassis. Each amplifier chassis has dual 980nm pump laser units. This is called 2x2 pump redundancy, the redundancy scheme used in the past.
Newer systems use 4x2 redundancy, improving reliability by adding two more pump lasers compared to the 2x2 scheme. Two EDFAs share the power from four pump lasers, so each fiber pair can tolerate at most three pump laser failures.
The submarine cables use DWDM (Dense Wavelength Division Multiplexing) so the data capacity is maximized at hundreds of Gbps (gigabits per second). The cables also employ optical amplifiers (a.k.a. repeaters) that boost the signals across each cable section.
So, yes, laying a cable from, say, Oregon to Japan is quite complicated and time-consuming to say the least.
From Land to Sea to Land
Once the cable (or more accurately, the series of cables) reaches its land-based destination, it is connected to submarine line terminal equipment (SLTE) and power feed equipment (PFE). The PFE is installed at a cable landing station, while the SLTE may be set in place at another inland location, such as a data center or central office.
There is, like everything in the tech world, much more to the technology, but that is a basic look at how your laptop gets connected to a webpage in Japan or Spain.
4 Ways to Select the Right Fiber Optic Mode and Type
by Scott Mortenson
We understand the basic composition of a fiber optic cable — core, cladding, buffer — and that there are glass or plastic cores. Both zip light through them, but because this is technology, it gets more complicated as we go along.
The need for fiber optics in data centers for Internet Service Providers (ISPs), long distance telephone systems, networking, medical, military, aerospace, mechanical, automotive, etc., seems to be limitless. If you can think of a specific industry or market, there’s most likely a specific type of fiber optic to make the proper connection.
Single-Mode and Multi-Mode Fiber
Single-mode fiber has a small core (about 9 microns, or 0.009 millimeters), so the laser travels in only one mode; it is typically used for telephony and cable television. It has a low loss rate and virtually unlimited bandwidth, with laser sources at 1310nm (0.00131 millimeters) and 1550nm, where fiber attenuation is lowest.
For reference, a human hair is 0.08 to 0.12 mm thick — or 70-120 microns.
Multi-mode fiber has a larger core (either 50 or 62.5 microns) that supports multiple modes (rays) of light. MMF is generally used with low-cost light sources like LEDs, at wavelengths between 850nm and 1300nm, for shorter-reach networks running at gigabit speeds and up.

Plastic Optical Fiber
There’s also Plastic Optical Fiber with a 1mm core used for short, lower speed networks such as those within storage systems and internal data center security. Plastic Clad Silica (or Hard Clad Silica) has a thin plastic cladding on a glass core and is around 200 microns.
Since fiber optics is all about carrying and controlling the light/laser, refraction is something that needs to be considered as it makes the light bend. In multi-mode fiber, depending on the angle, the ray of light can be lost in the cladding, or reflected back into the core. Basically, the light can go bouncing around in the core and it may not do what you want if you’re not aware ahead of time.
Single-mode fiber (SMF) doesn’t share that problem as the core is so small, there’s nowhere for it to bounce. While the choice of core material (glass, plastic) affects chromatic dispersion, there is no problem with modal dispersion.
Bend Insensitive and Step Index Multi-Mode Fibers
Because optical fiber is sensitive to bending, and bending stresses the light in ways that can result in loss, there are also bend-insensitive fibers. These add a “low index layer” of glass to reflect the “lost” light back into the core, making the fiber less sensitive to loss from bending. The design can be used in single-mode and most multi-mode fibers.
Step index multi-mode fiber is made of one type of optical material, with a cladding of another type that offers different optical characteristics.
Because of the dispersion caused by the different path lengths of the various modes traveling in the core, this type of fiber is too slow for situations where speed is needed. Step index fiber is not widely used; it's found mainly in consumer audio and TV links.
One of the factors that limits the bandwidth is dispersion, which is the widening or spreading of pulses of light transmitting data as they travel down an optical fiber.
Graded index multi-mode fiber compensates for the different path lengths of the modes, offering hundreds of times more bandwidth than step index fiber. Instead of one abrupt boundary between core and cladding, the refractive index changes gradually (in effect, hundreds to thousands of tiny steps), and as light travels through each step, it is bent slightly back toward the core. Graded index MMF is primarily used for premises networks, LANs, fiber to the desk, CCTV, and other security systems.
The Five Grades of Multi-Mode Fiber
Originally, multi-mode fibers came in several sizes for various networks and sources. Eventually, this was standardized as OM1, with a 62.5 micron core and 125 micron cladding. OM2, with 50/125 fiber, enables gigabit connections over greater spans for LANs.
Newer OM3 or laser-optimized 50/125 fiber is often considered the best choice for multi-mode applications. OM4 fiber offers a higher bandwidth for 10G+ networks. And OM5 is wide-band multi-mode optimized for wavelength division multiplexing (WDM) for Vertical-Cavity Surface-Emitting Lasers (VCSELs) in the 850-950nm range.
To make sure you get the right fiber optic type for your project, you'll need the size (the core and cladding diameters in microns), the attenuation coefficient (dB/km), and, for multi-mode, the bandwidth (MHz-km). For single-mode fiber, you'll need the chromatic dispersion and polarization-mode dispersion specifications.
The bottom line is there’s a type of fiber optic cable for nearly every need, situation, and project — large and small — to fulfill the client’s requirement with (hopefully) room to grow. Data center facilities may use a mixture of copper cables and glass fiber, and each has its pluses and minuses.
Approved Networks Performance and Logistics Expertise Helps a Large Service Provider Save in Multiple Ways
A large US service provider was under considerable pressure to reduce capital and operating expenses for a large deployment in their edge network. They identified open standards-based optics as the easiest way to reduce these costs but wanted to take their savings a step further by eliminating the cost of carrying spare parts inventory as well.
Distortion
by Eric Dalen (pseudonym)

Confusion
by Eric Dalen (pseudonym)

The Fear of The Dark
by Eric Dalen (pseudonym)

Approved Networks Releases 100G QSFP28 Universal Transceiver
by Scott Mortenson

Approved’s New QSFP28 Universal Transceiver Provides an Alternative Solution for Multi-Mode LC and Single-Mode CWDM4 Environments
Lake Forest, California (December 1, 2022): Approved Networks, a brand of Legrand, the industry leader and authority in programming, testing, and distribution of quality third-party optics, announced today the addition of a 100GBASE QSFP28 Universal transceiver for both multi-mode and single-mode applications.
The QSFP28 Universal provides an alternative solution for 100G multi-mode Lucent Connector (LC) environments, with the added feature of compatibility with single-mode CWDM4 optics up to 2km.
“Existing multi-mode LC solutions are difficult to source – up to 40 weeks in some cases – and they’re more expensive,” said Brian Patton, VP, Engineering, Approved Networks. “Our universal solution not only costs less, but is in stock, and offers the single-mode interoperability that no other multi-mode LC solution has.”
The QSFP28 Universal transceiver is part of a deep and diverse family of optical devices from Approved Networks, including Direct Attach Copper, Active Optical Cables, Active Electrical Cables, and passive WDM solutions. The QSFP28 Universal joins an extensive line of transceivers for Enterprise Data Centers, Managed Services Data Centers, Colocation Data Centers, and Cloud Data Centers.
Find out more about the QSFP28 Universal here.
About Approved Networks, a brand of Legrand
Approved Networks, a brand of Legrand in the Data, Power, and Control Division, provides cost-effective, high-performance optical solutions to a global network of Fortune 500 Enterprise, Data Center, and Service Provider partners. For over 30 years, Approved has been the industry authority on OEM alternative optical networking connectivity through a commitment to technical engineering, stringent quality standards, extensive testing capabilities, and dedicated customer service and support – before, during, and after deployment. Over 10,000 customers in more than 40 countries trust Approved Networks transceivers, DACs, AOCs, and passive solutions to light their networks. https://www.approvednetworks.com
About Legrand
Legrand is the global specialist in electrical and digital building infrastructures. Its comprehensive offering of solutions for commercial, industrial and residential markets makes it a benchmark for customers worldwide. The Group harnesses technological and societal trends with lasting impacts on buildings with the purpose of improving life by transforming the spaces where people live, work and meet with electrical, digital infrastructures and connected solutions that are simple, innovative and sustainable. Drawing on an approach that involves all teams and stakeholders, Legrand is pursuing its strategy of profitable and responsible growth driven by acquisitions and innovation, with a steady flow of new offerings—including products with enhanced value in use (faster expanding segments: datacenters, connected offerings and energy efficiency programs). Legrand reported sales of €7.0 billion in 2021. The company is listed on Euronext Paris and is notably a component stock of the CAC 40 and CAC 40 ESG indexes. (code ISIN FR0010307819). https://www.legrand.us/
Approved Networks Open Network Solution Passes the Test, Excels in Service Provider Network
A service provider based in a large Midwestern city needed a new solution for a brownfield application. Their existing equipment was operating at capacity and was rapidly approaching the end-of-life deadline. While open network solutions would fulfill their capacity requirements and deliver the additional functionality they needed, it was also necessary to prove interoperability with their existing OEM routers.
How to Address the Price Increase Dilemma
by Scott Mortenson

Price increases are affecting data centers across the country as chip shortages and extended lead times stretch to a year or more.
Name an OEM (Original Equipment Manufacturer) in the fiber optic industry, and they have either announced a price increase, or will soon. The reason for these hikes can be placed on far-off situations like the COVID-19 lockdown in China or the war between Russia and Ukraine, but it’s still an issue for us at home. Demand is high, but inventory is not.
If all you seem to be seeing are price hikes, lack of stock and messages about ever-increasing lead times, it might be time to look for an alternate source. Do you have a contract or purchasing agreement with an OEM? More than one? Here are a few steps you can take to reduce or maybe avoid price increases. You could even save some money!
• Review your agreement inside and out and consider the fair market value range from when the agreement was initiated to the present.
• Do a price benchmark analysis to identify the fair market value and bring to light potential saving opportunities.
• Consider whether the contract still makes sense for your company as hikes in prices can undermine what were once advantages.
• Go to the OEM and renegotiate better pricing . . . but that won't address the inventory shortages and long lead times.
• If your agreement allows you to use alternate sources, do it -- at least check out the alternatives. Even if the contract seems to block that particular door, shop around anyway.
One of the advantages of a 3rd-party supplier like Approved Networks is our independent ability to bring in inventory and not wait for you to order it. Maintaining high stock levels is business as usual.
OEM alternatives like Approved Networks can save you 60%, 70%, even 80% off the OEM price, keeping your budget on track and in line. Plus, the lower prices of the alternatives make the OEM increases look even worse by comparison.
Of course, you want quality that matches the OEM. We source our products from the same contract manufacturers as the OEMs. Our transceivers, DACs, AOCs, AECs, cables, passives, etc., are not only from the same Tier 1 suppliers, but we back them with an industry-leading warranty.
The "Price Increase Dilemma" is really only a dilemma if you're stuck with limited sourcing. Our advice: Do your research, compare your contract pricing with alternative suppliers, and act accordingly. You might be surprised how a predicament could turn into a positive.
Approved Networks Delivers Much-Needed Bandwidth to Regional Data Services Provider… Fast!
A data services provider in the Southeastern United States was faced with a serious time crunch. They needed to deploy greenfield services for one of their long-time customers by the end of November. With the Thanksgiving holiday fast approaching, they were struggling to source all of their equipment in time.
What’s Going on with High Speed to Small Communities?
by Scott Mortenson

Roughly 65% of rural communities have access to broadband and high-speed Internet. But what about the other 35%? Why don’t they have access?
The reason for this is fairly simple to understand: Money.
This is particularly sensitive if we understand that Internet access has become a necessity, right up there with water and electricity. So you would think the people hooking all this up would be falling all over themselves to get access to these new customers. But it doesn’t quite play out that way.
For example, if there's a small community of a few hundred people, it would take an Internet Service Provider (ISP) years, if not decades, to recoup their costs of laying wire and/or fiber to connect those folks. Especially if Uncle Charlie and Aunt Mabel live out of town at the end of a 7-mile gravel road. Why would an ISP spend tens of thousands (hundreds of thousands?) to dig trenches and bury miles of cable/wire/fiber to hook them up?
Believe it or not, the government understands the importance of getting Uncle Charlie and Aunt Mabel connected. And it’s not just the Federal Communications Commission who wants to wire them up, but another bureaucracy: The USDA.
As U.S. Department of Agriculture Secretary Tom Vilsack said: “Connectivity is critical to economic success in rural America. The Internet is vital to our growth and continues to act as a catalyst for our prosperity."
This is because rural towns and communities contribute a lot to the economy, especially farms and farmers, and that's why the people in DC are giving incentives for high-speed Internet. It’s called the USDA's Telecommunications Infrastructure Loans and Loan Guarantee program -- https://www.rd.usda.gov/programs-services/telecommunications-programs/telecommunications-infrastructure-loans-loan-guarantees
If you clicked that link, you’d see roughly $400 million being given away. Eligible service providers looking to construct, expand or improve their networks can apply for financial assistance.
But that $400 million is just a drop in the bucket. And that’s where RDOF comes in.
R . . . what?
The Rural Digital Opportunity Fund (RDOF.com) is the FCC’s answer, and has allocated an unprecedented $20.4 BILLION (yes, with a “B”) for the construction of broadband networks in rural communities. And there are others as well trying to organize and deploy broadband to the smaller towns and villages.
In fact, it’s entirely possible that Uncle Charlie and Aunt Mabel (out at the end of that 7-mile gravel road) are already hooked up with Fiber To The Home, while their neighbors in Small Town, USA are barely getting by with DSL.
RDOF is just a piece of the puzzle, and they will tell you “traditional communications providers – who often overlook rural America as not profitable enough – will be competing hard for RDOF dollars.”
Instead of “Build it and they will come” it’s now “Pay for it and they will build it.”
Even if your local small-town ISP applies for, and gets, RDOF funding, how long do you think it would take to deploy? It would be safe to say your weekends will be free for the next year or two. Still, as the old joke goes, you can't win the lottery if you don't buy a ticket. ISPs will need to apply for the programs before they can be considered. And they'll need to explain what they'll do and how they'll do it.
In the early-to-mid 2000s, when mobile phones were becoming popular, you could drive from, say, Los Angeles to Las Vegas and spend most of that time without a cell signal – because the telephone companies hadn't hooked up the towers yet. And why should they? There were only about 12 people living out there.
But then the telcos realized (or got a lot of complaints because) there are a million cars traveling on Interstate 15. And if your car breaks down in the Mojave Desert when it’s 115 degrees, a cell phone becomes rather important.
Now you can drive from SoFi Stadium to the Bellagio without dropping a bar.
Yes, high-speed access to small, neglected communities has not only become important, but a necessity. And while there’s still a lot of marketing going on between companies trying to entice customers away from their competitors, the next battleground looks to be who gets to light up the property of Uncle Charlie and Aunt Mabel.
It may not be fought with flashy billboards and multi-million dollar advertisements, but quietly, calmly and methodically behind the scenes and out of the spotlight. Maybe it’s time to do some research and to find out what the next step might be.
What is the Catalyst for Change?
by Scott Mortenson
Originally published July 25, 2022

Heraclitus, a Greek philosopher, has been quoted as saying “Change is the only constant in life.” Ironically, this was supposedly in a book he wrote which was destroyed and only exists in fragments.
Sometimes change can go too far. But it’s certainly true that in the world of technology, change is constant.
10G was cutting-edge fast for fiber optics a few years ago. Now Data Centers are switching up to 100G if they’re not already there. Many are considering 400G. What’s on the horizon? Yup – 800G.
Changes in Data Center tech are not always mandatory . . . but they may be necessary. There are many variables to consider. Business needs, of course, come first. If they're not being achieved, something has to change, but making a change after the fact means you're only trying to keep up, not stay ahead of the curve.
Change for change’s sake is not a cure-all either. Upgrading to the latest and greatest can be a waste of both money and resources if you’re too far ahead of the curve. Sure, you’ll have more “headroom” but it’s like a manufacturer buying a warehouse that’s far too big for their current needs in anticipation of growth they hope will happen.
The trick is to keep your solutions provider apprised of your requirements and budget so we can help you manage growth and expectations. The catalyst for change is your needs based on your past, present and future. Keeping in touch with your account manager on a regular basis is important, not only to monitor your current needs but to maintain an understanding of what is going well for you, and where we can offer solutions.
Very rarely are needs reduced – they almost always increase. You will add clients, technology, servers, connections, and bandwidth. As the pandemic showed us, requirements can change overnight, and what was once optional becomes a priority.
While no one expects (or hopes) another pandemic will happen, it did teach us how nimble and proactive the industry needs to be in relation to business needs. They change, and we help you change with them.
The more we are able to keep our finger on the pulse of your business needs, the more agile and responsive we can be and stay not only one step ahead of the issues you face, but also out front of your competitors.
To DAC or AOC – That is the Question
by Scott Mortenson
Originally published July 18, 2022

So, you want to get your Data Center upgraded to 400G technology — good move. Though going from 40G or 100G up to 400G won't "future proof" your set-up, it will improve transmission and efficiency greatly.
But while deciding to make that transition is one step — and getting the budget for it approved is another — your next task is determining what equipment and connections you need to cost-effectively get the job done. The criteria seem simple on the surface but are critical for the best design and operation. Cabling is one of the more important aspects, and with 400G Active Optical Cables on the scene, you may be weighing the advantages and disadvantages.
In order to choose effectively, the first consideration is distance. DACs (Direct Attach Copper cables) are better for short top-of-rack or rack-to-rack runs and are generally the preferred way to interconnect inside server racks. Being passive, the twinaxial copper cables (two inner conductors instead of one) are good for high speeds but only over short distances — say 7 meters or less. Because of this, they are less expensive than short-range optical solutions, and having fewer components provides more consistent transmission performance. DACs also do not produce heat, making them more versatile across wider temperature ranges.
If you're in need of speed at a greater distance (up to 100 meters), AOC is probably just the ticket. AOCs are most often used to create 3-30 meter switch-to-switch or switch-to-server links inside hyperscale, cloud, enterprise, and government data centers. For best practices, we recommend AOC deployments no longer than 30 meters.
Another benefit of AOCs is that they have roughly half the volume and a quarter of the weight of DACs — which may also be vulnerable to outside interference. (DACs are shielded cables, and while they are not immune to electromagnetic interference, the impact is greatly reduced.)
400G AOC is fiber optic, so it doesn't conduct electrical currents, and resists interference from electromagnetic and radio signals. And while the 400G DAC is less expensive than the same-length 400G AOC, if you're looking at a distance over 5 meters, AOC is what will meet your needs.
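Distilled into a toy decision helper (the thresholds are the rules of thumb from this post, not a universal standard):

```python
def pick_cable(link_meters: float) -> str:
    """Rough DAC-vs-AOC guidance using the distances quoted in this post."""
    if link_meters <= 7:
        return "DAC: passive twinax; cheapest, no heat, ideal for in-rack runs"
    if link_meters <= 30:
        return "AOC: recommended deployment range"
    if link_meters <= 100:
        return "AOC: supported, but keep runs as short as practical"
    return "Beyond DAC/AOC territory: use transceivers and structured fiber"

print(pick_cable(3))    # -> DAC ...
print(pick_cable(25))   # -> AOC ...
```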
Another consideration is if you need compatibility for a specific OEM vendor because that will be important as well, as is testing to make sure they work, work well, and have a warranty to back them up.
Chances are, Active Optical Cables are the right solution in the long run. But being careful and making the right choices ahead of time is how you avoid hiccups later.
Got questions? Consult with an Approved Networks expert to select the best items for your Data Center and connection needs.
The Supply Chain Crisis is an Opportunity to Reimagine Your Strategy
By Scott Mortenson
Originally Published June 24, 2022
It’s bad enough there are chip shortages, fiber shortages, hardware shortages and component shortages affecting the fiber optic industry. Now there’s another layer causing concern: a labor shortage.
Even if a magic wand were passed over the land and all the product scarcities were removed overnight, there’s a sufficient lack of qualified, skilled people to get it all connected and running. The material and hardware concerns are a sore point. Supply chains were already getting squeezed before COVID-19 raised its head. Then the pandemic all but ground production to a complete halt.
Now that factories are operating again – though perhaps not at full capacity – it will take time to get through the backlog of orders and shipments. Fiber build-out capacity is not keeping up with demand, delaying service provider network expansion. Consumers may see an ad touting a new fiber network, only to go online and discover it’s not available in their neighborhood yet. Part of this is certainly due to the material shortages, but a lack of skilled labor adds another speed bump to the process.
There are probably more than enough people to fill the roles needed to help connect the networks coast to coast . . . but training new workers takes time – anywhere from 6 months to 1 year. Some companies hire trainees, putting them through the necessary courses, basically paying them to go to school. The field work may not be getting done until the “students” graduate. While it is good to invest in people for the success of the company and the industry, it’s not a quick undertaking.
Alternately, the Telecommunications Industry Registered Apprenticeship Program is adding to the workforce, and others have partnered with local colleges, universities and trade schools.
So, on top of supply chains not getting filled as quickly as demand would like, and the bottlenecks at the harbors and freight yards where material sits waiting to be shipped and delivered, there are not enough qualified people to get everything connected, powered, and running.
AT&T, for example, had targeted 3 million homes for fiber rollouts, but fell short by 500,000. Small ISPs got hit even harder, reporting widespread delays, not only for fiber, but the electronics to run it – routers, optical network terminals, modems, etc. What was a 3-to-4-week delay turned into 12 weeks, with much of the 2021 planned construction pushed into 2022.
And it’s not just a fiber shortage haunting purchasing departments. Challenges in getting parts of all types – server equipment, switches, cables – made shelves less stocked than companies wanted them to be. While Fiber-to-the-Home is profitable and a big moneymaker for the companies providing it, if they can’t get the fiber and associated hardware to make it work, there are only costs.
Fortunately, as the Coronavirus loosens its grip on society and the workforce, and supplies start flowing again, the backlog and headaches will start to ease up. Planning and patience will take the place of pandemics and pressure. Yes, demand will continue to increase, always out-stripping supply capabilities, but at least it should be somewhat manageable, if not reasonable.
The shortage of pretty much everything may be another “new normal”, but at least it is a known issue. And although it’s not an ideal predicament, there is a light at the end of the tunnel. Approved Networks is fortunate to not have all our eggs in one basket, as we utilize several reliable custom manufacturers to keep the supply chain moving. Transceivers, DACs, AOCs, cables, passive WDMs, rack mount sliding panels, MTP cassettes, adapter plates and more keep our customers up and running.
We may not have the solution for labor difficulties, but we’ve got the products for your fiber optic network needs.
Should You Be Worried About Rising Debt-To-Income Ratios for Mortgages?
by Scott Mortenson March 29, 2018
What Are Debt-To-Income Ratios?
Debt-to-Income (DTI) ratios have long been used in the lending industry to determine a borrower’s ability to repay. A 40% DTI means that if a borrower makes $10,000 per month, their mortgage payments, car payments, and other debts combined will equal $4,000 per month. Thus, the borrower has $6,000 per month in spending money. While this may sound like a comfortable monthly cushion, the figures change drastically with less income. A borrower making $2,500 per month with a 40% DTI is left with only $1,500 each month.
Even worse, the DTI is calculated with gross income, rather than net. In reality, a borrower in this situation would be left with ~$1,000 per month to cover utilities, cell phone, car insurance, gas, food, and personal expenses. Suddenly, it becomes clear that maintaining comfortable ratios is an essential component of the market’s health.
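For readers who want to reproduce the arithmetic, here is a minimal sketch (the ~25% tax haircut is an illustrative assumption; actual net pay varies by borrower):

```python
def dti_snapshot(gross_monthly_income: float, monthly_debts: float):
    """Back-end DTI plus the cash left over, on gross and rough net income."""
    dti = monthly_debts / gross_monthly_income
    gross_residual = gross_monthly_income - monthly_debts
    net_residual = gross_monthly_income * 0.75 - monthly_debts  # assumed ~25% haircut
    return dti, gross_residual, net_residual

print(dti_snapshot(10_000, 4_000))  # -> (0.4, 6000, 3500.0)
print(dti_snapshot(2_500, 1_000))   # -> (0.4, 1500, 875.0), the ~$1,000 case above
```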
Increased DTIs Per Fannie Mae
After bumping up the maximum DTI from 45% to 50% last year, the Government-Sponsored Enterprises have seen a large uptick in DTIs over 40%, with a significant number pushing 50%. As of March 17, 2018, Fannie Mae updated its Desktop Underwriter software to help the origination of high debt-to-income mortgages, in response to the rising demand.
According to Fannie Mae, the percentage of single-family purchases with a DTI over 45% was only 5% in 2016 and climbed to 10% in 2017. In the 4th quarter of 2017, that figure reached a whopping 20%.
As QuestSoft Sees It
At QuestSoft, we are seeing evidence of this inflation of the back-end ratio. In an analysis of 100 recent loans that have PMI, QuestSoft Verifications found that 41% of them had DTIs over 45%, and an incredible 11% had DTIs over 49%. This highlights a massive number of borrowers who are strapped for cash & “house heavy.”
Pushback by The Mortgage Insurers
During the financial crisis, mortgage insurers and lenders were often at odds over the insurance claims resulting from defaulted loans. According to the Financial Crisis Inquiry Commission, by 2010 the nation’s largest PMI companies “had rejected about 25% of the claims brought to them.” The banks fought back against this, arguing that the insurers should be paying a much higher number of claims.
Mortgage Insurance
In the end, although some insurers didn't survive, the remaining insurers and lenders adopted new rules to provide predictable default rates and dependable coverage. These new rules set strict guidelines as to what would and would not be paid out. A few of the nation's leading Mortgage Insurance companies have agreed to insure mortgages with a 45-50% DTI as long as the loan meets strict criteria, such as the borrower needing a 700+ FICO score.
Looking Forward
DTIs are undoubtedly on the rise, and it will be interesting to see how this plays out in the long run. It is important to note that although DTIs are rising, loans now have more verifications & hoops to jump through than ever before, which can help cushion a few of the fears regarding an extra 5% DTI ratio. The clear guidelines by the Mortgage Insurance companies are also likely to discourage frivolous lending, as no one wants to see their balance sheets in the red.
Even if the increased DTIs do not lead to a higher level of default, they do create more cash strapped borrowers who spend less money in the other aspects of their lives, which can have significant effects on the economy.
QuestSoft Verifications can perform fully documented verifications that meet investor and regulatory guidelines. Give us a call today to learn more!