The equivalent carbon content concept is used for ferrous materials, typically steel and cast iron, to estimate various properties of the alloy when more than just carbon is used as an alloying element, which is typical. The idea is to convert the percentages of alloying elements other than carbon into an equivalent carbon percentage, because the iron-carbon phases are better understood than other iron-alloy phases. The concept is most commonly used in welding, but it is also used in heat treating and in casting cast iron.
In welding, the equivalent carbon content (CE) is used to understand how different alloying elements affect the hardness of the steel being welded. Hardness is directly related to hydrogen-induced cold cracking, the most common weld defect in steel, so CE is most commonly used to assess weldability. Higher concentrations of carbon and of other alloying elements such as manganese, chromium, silicon, molybdenum, vanadium, copper, and nickel tend to increase hardness and decrease weldability. Because each of these elements influences hardness and weldability to a different degree, a method of comparison is needed to judge the difference in hardness between alloys of different compositions. There are two commonly used formulas for calculating the equivalent carbon content: one from the American Welding Society (AWS), recommended for structural steels, and one based on the work of the International Institute of Welding (IIW).
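Published versions of these formulas vary slightly in how they group the elements; commonly quoted forms, with each symbol denoting that element's concentration in weight percent, are

$$\mathrm{CE}_{\mathrm{AWS}} = \mathrm{C} + \frac{\mathrm{Mn} + \mathrm{Si}}{6} + \frac{\mathrm{Cr} + \mathrm{Mo} + \mathrm{V}}{5} + \frac{\mathrm{Ni} + \mathrm{Cu}}{15}$$

$$\mathrm{CE}_{\mathrm{IIW}} = \mathrm{C} + \frac{\mathrm{Mn}}{6} + \frac{\mathrm{Cr} + \mathrm{Mo} + \mathrm{V}}{5} + \frac{\mathrm{Ni} + \mathrm{Cu}}{15}$$

Both weight the elements by their relative contribution to hardenability, with carbon counted at full value and the other elements divided down accordingly.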
The AWS states that for an equivalent carbon content above 0.40% there is a potential for cracking in the heat-affected zone (HAZ) on flame-cut edges and welds. However, structural engineering standards rarely use CE directly; instead they limit the maximum percentages of certain alloying elements. This practice predates the CE concept and has simply continued. It has caused problems, because some high-strength steels now in use, with a CE above 0.50%, have suffered brittle failures.
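As a minimal sketch of how such a check might be carried out, the following Python example implements the two formulas given above and compares the result against the 0.40% AWS threshold. The function names, parameter names, and the sample composition are hypothetical and purely illustrative:

```python
# Illustrative sketch: equivalent carbon content (CE) from a steel
# composition. All inputs and results are in weight percent.
# Function and parameter names are hypothetical, not from any standard API.

def ce_aws(c, mn=0.0, si=0.0, cr=0.0, mo=0.0, v=0.0, ni=0.0, cu=0.0):
    """AWS-form equivalent carbon content (Si grouped with Mn)."""
    return c + (mn + si) / 6 + (cr + mo + v) / 5 + (ni + cu) / 15

def ce_iiw(c, mn=0.0, cr=0.0, mo=0.0, v=0.0, ni=0.0, cu=0.0):
    """IIW-form equivalent carbon content (Mn alone in the /6 term)."""
    return c + mn / 6 + (cr + mo + v) / 5 + (ni + cu) / 15

# Hypothetical structural-steel composition, in weight percent.
ce = ce_aws(c=0.20, mn=1.20, si=0.30, cr=0.10, mo=0.03, v=0.02,
            ni=0.15, cu=0.15)
print(f"CE = {ce:.2f}%")  # 0.20 + 1.50/6 + 0.15/5 + 0.30/15 = 0.50%

# Against the AWS guidance cited above, CE > 0.40% signals a
# potential for HAZ cracking, so precautions would be indicated.
if ce > 0.40:
    print("Potential for HAZ cracking: take welding precautions")
```

For this sample composition the AWS formula gives CE = 0.50%, comfortably above the 0.40% threshold, which illustrates how a steel can satisfy per-element composition limits while still having a high equivalent carbon content.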