Accreditation Requirements: Content

Photo by Charlotte May on Pexels.com

In a previous post, the meaning and impact of engineering program accreditation were discussed. Here, let’s look at what an engineering program has to show or contain to meet the minimum accreditation requirements. These requirements are contained in the rather arcane CEAB document “Accreditation Criteria and Procedures”, available online. I’ll try to summarize the highlights of this document, although there are many small details and sub-criteria that I won’t get into.

Curriculum Content

All engineering programs must contain certain broad categories of content. Roughly (with minimum percentages of total program hours shown in parentheses), programs must include:

  • Mathematics (>10%) including linear algebra, calculus, probability, statistics, numerical analysis.
  • Natural sciences (>10%) including some physics and chemistry, and possibly life sciences & earth sciences.
  • Engineering science and design (>50%). “Engineering science” includes application of math & natural science to practical problems, materials, fluid mechanics, electronics, environmental science, and others specific to the discipline. “Engineering design” involves the process of decision-making to devise products, processes, components, etc. to meet specified goals, which typically include considerations of health & safety, sustainability, economics, human factors, feasibility, regulatory compliance, etc.
  • The curriculum must culminate in a significant design experience, carried out under the supervision of a licensed engineering faculty member.
  • Complementary studies (>12%) which must include economics, humanities & social sciences, communications, impact of technology on society, health & safety, sustainable development, professionalism, ethics, equity and law.
  • The curriculum must include appropriate laboratory experience.

Not all the topics mentioned above have to be the subject of an entire course on their own; they can be parts of other courses. The specific courses and content will also depend on the engineering discipline to some extent. For example, Boolean algebra isn’t typically taught in Chemical Engineering but is in Computer Engineering; likewise, organic chemistry isn’t taught in Computer Engineering but is in Chemical Engineering.

There are some other related criteria and constraints, like the minimum number of total curriculum “hours” (roughly at least 1,850 lecture hours, but it’s complicated how these are counted) and minimum splits between engineering science and design, but that covers the main points. With all these requirements, it is easy to see why engineering programs in Canada are typically very structured and have relatively few elective courses compared to many other programs in arts, mathematics and sciences.
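To make the percentage thresholds concrete, here is a rough sketch of how one might check a program breakdown against them. This is my own illustration, not an official CEAB tool: the category names and course-hour numbers are made up, and real accreditation uses a more complicated "accreditation unit" counting than the simple 1,850-hour floor used here.

```python
# Rough CEAB-style content minimums, as summarized above
# (fractions of total program hours; illustrative only).
MINIMUM_FRACTIONS = {
    "mathematics": 0.10,
    "natural_sciences": 0.10,
    "engineering_science_and_design": 0.50,
    "complementary_studies": 0.12,
}

def check_program(hours_by_category, min_total_hours=1850):
    """Return a list of human-readable problems (empty list = passes this rough check)."""
    total = sum(hours_by_category.values())
    problems = []
    if total < min_total_hours:
        problems.append(f"total hours {total} below minimum {min_total_hours}")
    for category, min_frac in MINIMUM_FRACTIONS.items():
        frac = hours_by_category.get(category, 0) / total
        if frac < min_frac:
            problems.append(
                f"{category}: {frac:.1%} of program, minimum is {min_frac:.0%}"
            )
    return problems

# A hypothetical program that is light on complementary studies:
example = {
    "mathematics": 250,
    "natural_sciences": 250,
    "engineering_science_and_design": 1200,
    "complementary_studies": 150,
    "other": 150,
}
print(check_program(example))
# -> ['complementary_studies: 7.5% of program, minimum is 12%']
```

Even this toy version shows why the curriculum is so constrained: the minimums alone account for over 80% of the program.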

Program Environment

Aside from the curriculum content, engineering programs must have a suitable “environment”. This includes the quality, morale and commitment of students, faculty, support staff and administration. The quality, suitability and accessibility of labs, libraries, computing facilities and non-academic counselling must also be satisfactory. Other factors include:

  • The governance structure of the programs, from the Dean down to the curriculum committees, must be suitable and fully within the control of engineering faculty members, especially those holding engineering licenses.
  • There must be sufficient financial resources for programs to recruit and retain qualified staff and to maintain and renew infrastructure and equipment.
  • Engineering faculty must have a high level of competence and expertise, as demonstrated by
    • education level
    • diversity of background, including non-academic experience
    • experience and accomplishments in teaching, research and/or engineering practice
    • participation in professional, scientific, engineering and similar societies
  • A significant portion of the faculty are expected to be licensed to practice engineering in Canada, and especially those teaching courses that involve engineering science and design (typically upper year courses and electives).

This summarizes some of the requirements, but there is a whole additional set called “graduate attributes”. Those will have to be the subject of another post, since the list is quite long.

Accredited Engineering Programs

Photo by Markus Spiske on Pexels.com

When high school students are looking at applying to engineering in Canada they might run across something stating that the institution’s program is “accredited”. In fact, online there is a whole list of Accredited Engineering Programs in Canada that you can consult if the institution website is not clear about this. All engineering programs at Waterloo are accredited. But what does this mean, and why does it matter?

“Accredited” simply means that the program has been reviewed on a regular basis by the Canadian Engineering Accreditation Board (CEAB, part of Engineers Canada) and that it meets or exceeds certain minimum educational standards. A future post will go into more depth on these standards, since they’re a bit complicated. Suffice to say for now that the standards include what is taught, how it’s taught, who does the teaching, how good the facilities are, and various other aspects.

Why does it matter? Well, in Canada if you want to practise “engineering”, fulfill certain roles that have regulatory requirements, and refer to yourself professionally and in public as an “engineer”, you need to hold a license from the provincial body that regulates engineering (PEO in Ontario, for example). I hold a license in Ontario to practise chemical engineering and can use the title “P.Eng.” (professional engineer) in official business. In Ontario, you can look up to see if someone holds a valid engineering license using the PEO Directory.

To get an engineering license, you need to demonstrate that you have the required educational background (among other things). If you graduated from an accredited undergraduate engineering program, that background is automatically accepted and the hurdle is cleared. If you didn’t graduate from an accredited program (for example, an engineering program in a foreign country), you’ll have to go through a long documentation process and possibly write a variety of technical exams to prove your background competency. These exams cost money to write and are not easy, so graduating from an accredited program saves a lot of time, money and effort.

The accreditation and licensing landscape is somewhat similar in the U.S., where ABET (the Accreditation Board for Engineering and Technology) examines programs and each state has its own specific professional engineering (P.E.) requirements. There are also various differences, and a license in one state or province is not necessarily transferable to another, so it’s a bit complicated and I’m no expert on that. The bottom line, however, is that graduating from an accredited program makes life much easier if you intend to be a legally-recognized engineer somewhere.

Professor Emeritus

Quite a few years ago I wrote “A Guide to University Nomenclature”, which included the various titles of academic personnel. Apparently I left out the title I now hold, i.e. “Professor Emeritus”, so I should add something about that!

Photo by Vanessa Garcia on Pexels.com

What does “emeritus” mean? According to Wikipedia, it’s an adjective for an honorary title granted to someone who retires from an academic position but is allowed to continue using the previous title. Essentially, a Professor Emeritus is a professor who has retired, which I did in 2021.

At Waterloo the title of Professor Emeritus is awarded automatically at retirement for faculty who have served at least 15 years. It comes with the following list of benefits: 1) 75% discount on parking passes at the University, 2) … actually that’s it. Come to think of it, all retirees get the same discount whether faculty or staff, so never mind. Any other benefits are negotiable with your former department.

In my case, I still have an office and some lab space because I still manage some research projects and co-supervise some graduate students. Sort of a retirement hobby I guess, since I’m not paid for that. I am currently paid for the Winter 2024 term to teach a course on Air Pollution Control as a “Sessional Lecturer”, since the Chemical Engineering department is short-staffed and didn’t seem to have anyone available to teach it. That’s another role that a Professor Emeritus might fill, if needs arise and they are willing.

Blog Redux

It’s been quite a while since I’ve posted on this blog, for a variety of personal and some professional reasons. I’ve kept the site alive and functioning (I think?) in the meantime, since the stats show that there continues to be about 100 visitors per day. I guess there is some interest and value in the old posts.

I do intend to start posting again on some sort of regular basis. There are all sorts of topics about engineering (chemical in particular), education, academia, and maybe even admissions, that I have long had plans for. If there are any specific topics of interest to visitors let me know in the comments. See you later!

Is the HEPA Helping?

Once the role of airborne/aerosol transmission of COVID-19 became more recognized, lots of places started putting HEPA filter devices into offices, classrooms, and various other locations. HEPA (High Efficiency Particulate Air) filters were initially created in the 1940s to help remove radioactive materials from the air in labs and manufacturing spaces (during the development of the atomic bomb). Since then they have found common use in labs, manufacturing and other spaces where fine particles need to be controlled, including removal of biological pathogens from air. Generally, a HEPA filter is one that can remove at least 99.97% of 300 nm (0.3 micrometre) particles from the air that travels through it.

Photo by CDC on Pexels.com

At first glance, 99.97% efficiency seems quite impressive and a good level of protection from bacteria and viruses. However, the reality is somewhat more complicated. The basic question is whether the HEPA device sitting in the room is significantly reducing pathogen exposure or not. Like many engineering questions, it depends on the context, and here we will explore some of those factors.
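One common way to start framing that question is the standard well-mixed room model (my own addition here, not necessarily where this post goes): with a constant aerosol source, the steady-state concentration is inversely proportional to the total clean-air delivery into the room, so what matters is how the HEPA unit’s clean air delivery rate (CADR) compares with the room’s existing ventilation.

```python
def steady_state_reduction(ventilation_ach, cadr_m3_per_h, room_volume_m3):
    """Fractional reduction in steady-state aerosol concentration from
    adding a HEPA unit, assuming a perfectly well-mixed room with a
    constant aerosol source. Reduction = CADR / (ventilation + CADR)."""
    ventilation_m3_per_h = ventilation_ach * room_volume_m3
    return cadr_m3_per_h / (ventilation_m3_per_h + cadr_m3_per_h)

# Illustrative numbers (assumed, not from this post): a 30 m^2 classroom
# with 3 m ceilings (90 m^3), 3 air changes per hour of ventilation,
# plus a portable HEPA unit rated at 300 m^3/h CADR.
reduction = steady_state_reduction(ventilation_ach=3, cadr_m3_per_h=300,
                                   room_volume_m3=90)
print(f"{reduction:.0%}")  # roughly halves steady-state exposure
```

The same unit in a large, well-ventilated space would make a much smaller dent, which is exactly the kind of context dependence discussed above.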


Ultraviolet light can make indoor spaces safer during the pandemic – if it’s used the right way

A nice article by Prof. Karl Linden at U Colorado, republished from “The Conversation” under CC license. Prof. Linden is a well-known fellow member of the UV research community and IUVA organization. I couldn’t say it any better than him!

Institutions like hospitals and transit systems have been using UV disinfection for years. Sergei Bobylev/TASS via Getty Images

Karl Linden, University of Colorado Boulder

Ultraviolet light has a long history as a disinfectant and the SARS-CoV-2 virus, which causes COVID-19, is readily rendered harmless by UV light. The question is how best to harness UV light to fight the spread of the virus and protect human health as people work, study, and shop indoors.

The virus spreads in several ways. The main route of transmission is through person-to-person contact via aerosols and droplets emitted when an infected person breathes, talks, sings or coughs. The virus can also be transmitted when people touch their faces shortly after touching surfaces that have been contaminated by infected individuals. This is of particular concern in health-care settings, retail spaces where people frequently touch counters and merchandise, and in buses, trains and planes.

As an environmental engineer who studies UV light, I’ve observed that UV can be used to reduce the risk of transmission through both routes. UV lights can be components of mobile machines, whether robotic or human-controlled, that disinfect surfaces. They can also be incorporated in heating, ventilating, and air-conditioning systems or otherwise positioned within airflows to disinfect indoor air. However, UV portals that are meant to disinfect people as they enter indoor spaces are likely ineffective and potentially hazardous.

What is ultraviolet light?

Electromagnetic radiation, which includes radio waves, visible light and X-rays, is measured in nanometers, or millionths of a millimeter. UV irradiation consists of wavelengths between 100 and 400 nanometers, which lie just beyond the violet portion of the visible light spectrum and are invisible to the human eye. UV is divided into the UV-A, UV-B and UV-C regions, which are 315-400 nanometers, 280-315 nanometers and 200-280 nanometers, respectively.
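As a quick reference, the band boundaries just described can be captured in a few lines (a trivial sketch of the classification above, added for convenience):

```python
def uv_band(wavelength_nm):
    """Classify a wavelength (in nanometres) into the UV bands
    described in the text: UV-A 315-400, UV-B 280-315, UV-C 200-280."""
    if 315 <= wavelength_nm <= 400:
        return "UV-A"
    if 280 <= wavelength_nm < 315:
        return "UV-B"
    if 200 <= wavelength_nm < 280:
        return "UV-C"
    return "not in the UV-A/B/C bands"

print(uv_band(254))  # the classic germicidal mercury-lamp line -> UV-C
print(uv_band(222))  # a "Far UV-C" wavelength -> UV-C
```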

The ozone layer in the atmosphere filters out UV wavelengths below 300 nanometers, which blocks UV-C from the sun before it reaches Earth’s surface. I think of UV-A as the suntanning range and UV-B as the sun-burning range. High enough doses of UV-B can cause skin lesions and skin cancer.

UV-C contains the most effective wavelengths for killing pathogens. UV-C is also hazardous to the eyes and skin. Artificial UV light sources designed for disinfection emit light within the UV-C range or a broad spectrum that includes UV-C.

How UV kills pathogens

UV photons between 200 and 300 nanometers are absorbed fairly efficiently by the nucleic acids that make up DNA and RNA, and photons below 240 nanometers are also well absorbed by proteins. These essential biomolecules are damaged by the absorbed energy, rendering the genetic material inside a virus particle or microorganism unable to replicate or cause an infection, inactivating the pathogen.

It typically takes a very low dose of UV light in this germicidal range to inactivate a pathogen. The UV dose is determined by the intensity of the light source and duration of exposure. For a given required dose, higher intensity sources require shorter exposure times, while lower intensity sources require longer exposure times.
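The dose relationship stated above (dose = intensity × exposure time) can be illustrated with a toy calculation; the numbers below are made up for illustration, not taken from the article:

```python
def exposure_time_s(required_dose_mj_per_cm2, intensity_mw_per_cm2):
    """Seconds of exposure needed to reach a target UV dose.
    Since 1 mJ = 1 mW x 1 s, the units cancel directly."""
    return required_dose_mj_per_cm2 / intensity_mw_per_cm2

# An assumed target dose of 10 mJ/cm^2 at two different source intensities:
print(exposure_time_s(10, 1.0))  # lower-intensity source: 10.0 seconds
print(exposure_time_s(10, 5.0))  # 5x the intensity: 2.0 seconds
```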

Putting UV to work

UV disinfection, which can be performed by robots like this, reduces hospital-acquired infections. Marcy Sanchez/William Beaumont Army Medical Center Public Affairs Office

There is an established market for UV disinfection devices. Hospitals have been using robots that emit UV-C light for years to disinfect patient rooms, operating rooms and other areas where bacterial infection can spread. These robots, which include Tru-D and Xenex, enter empty rooms between patients and roam around remotely emitting high-power UV irradiation to disinfect surfaces. UV light is also used to disinfect medical instruments in special UV exposure boxes.

UV is being used or tested for disinfecting buses, trains and planes. After use, UV robots or human-controlled machines designed to fit in vehicles or planes move through and disinfect surfaces that the light can reach. Businesses are also considering the technology for disinfecting warehouses and retail spaces.

The New York City Metropolitan Transit Authority (MTA) is testing the use of ultraviolet light to disinfect out-of-service subway cars. MTA, CC BY-SA

It’s also possible to use UV to disinfect air. Indoor spaces like schools, restaurants and shops that have some air flow can install UV-C lamps overhead and aimed at the ceiling to disinfect the air as it circulates. Similarly, HVAC systems can contain UV light sources to disinfect air as it travels through duct work. Airlines could also use UV technology for disinfecting air in planes, or use UV lights in bathrooms between uses.

Far UV-C – safe for humans?

Imagine if everyone could walk around continuously surrounded by UV-C light. It would kill any aerosolized virus that entered the UV zone around you or that exited your nose or mouth if you were infected and shedding the virus. The light would also disinfect your skin before your hand touched your face. This scenario might be possible technologically some day soon, but the health risks are a significant concern.

As UV wavelength decreases, the ability of the photons to penetrate into the skin decreases. These shorter-wavelength photons get absorbed in the top skin layer, which minimizes DNA damage to the actively dividing skin cells below. At wavelengths below 225 nanometers – the Far UV-C region – UV appears to be safe for skin exposure at doses below the exposure levels defined by the International Commission on Non-Ionizing Radiation Protection.

Research is confirming these numbers using mouse models. However, less is known about exposure to eyes and injured skin at these Far UV-C wavelengths, and people should avoid direct exposure above safe limits.

(Video: research suggests that far UV-C light might be able to kill pathogens without harming human health.)

The promise of Far UV-C for safely disinfecting pathogens opens up many possibilities for UV applications. It’s also led to some premature and potentially risky uses.

Some businesses are installing UV portals that irradiate people as they walk through. While such a device may not cause much harm or skin damage in the few seconds it takes to walk through the portal, the low dose delivered, and thus limited potential to disinfect clothing, means it would likely not be effective for stemming virus transmission.


Most importantly, eye safety and long-term exposure have not been well studied, and these types of devices need to be regulated and validated for effectiveness before being used in public settings. The impact of continuous germicidal irradiation exposure on the overall environmental microbiome also needs to be understood.

As more studies on Far UV-C bear out that exposure to human skin is not dangerous and if studies on eye exposure show no harm, it is possible that validated Far UV-C light systems installed in public places could support attempts at controlling virus transmission for SARS-CoV-2 and other potential airborne viral pathogens, today and into the future.

Karl Linden, Professor of Environmental Engineering and the Mortenson Professor in Sustainable Development, University of Colorado Boulder

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Medical Oxygen

Emergency hospital during 1918 influenza epidemic, Camp Funston, Kansas. (CC-BY-2.5)

There are some questions about whether the SARS-CoV-2 virus is more or less deadly than the 1918 influenza virus. It’s not really possible to accurately compare the two pandemics’ case and fatality data, for one very big reason. Oxygen!

Today, when someone has any sort of respiratory problem the first likely action is to provide supplementary oxygen. The air we breathe contains about 21% oxygen, which for a healthy person is obviously quite adequate. But for sick people, raising the concentration to near 100% can take the load off their heart and lungs and help prevent other problems like organ failure.

In the 1918 pandemic, the use of supplemental oxygen was not widely known or accepted (as illustrated in the picture, where no one is getting oxygen). There had been some experimentation and use in prior decades (and in treating some chemical weapons victims of the First World War), but it was not yet widespread. Even if it had been enthusiastically embraced, the supplies of oxygen were very limited and it was not readily available on an industrial scale. Therefore we read stories of even young and previously healthy people succumbing to the influenza virus within hours, turning blue due to lack of oxygen in their bloodstream. If oxygen had been available, many of them may have had a reasonable chance to survive and recover. Of course, today we have many other pharmaceutical interventions like steroids and monoclonal antibodies, none of which were available in 1918. But the oxygen is still needed to keep the patient alive long enough so that the pharmaceuticals can have a chance at working.

Where do we get medical oxygen? There are smaller-scale purifying units that take in air and concentrate the oxygen in it, using adsorbents or membranes to separate the oxygen from nitrogen. These are fine for portable use, smaller scales, or lower flows, but they are not readily scalable to the huge volumes required in a hospital for high-flow oxygen therapy. Likewise, oxygen cylinders are not very practical in hospitals due to their limited capacity. For example, the large “T” size cylinders (about 24 cm diameter by 130 cm tall) contain only about 9,000 L of oxygen once it is depressurized. For a patient needing high-flow oxygen therapy at up to 60 L per minute, a cylinder would last only about two and a half hours. If there are a lot of patients, it would take a small army of people constantly moving cylinders in and out of rooms and the hospital. Unfortunately, in poorer and less industrialized parts of the world these options are often the only ones available.
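The cylinder arithmetic above is easy to check (a back-of-envelope sketch, using the capacity and flow numbers quoted in the text):

```python
def cylinder_duration_min(cylinder_capacity_l, flow_l_per_min):
    """Minutes a gas cylinder lasts at a given continuous flow rate."""
    return cylinder_capacity_l / flow_l_per_min

# A "T" cylinder holding about 9,000 L of gas, at a high-flow
# oxygen therapy rate of 60 L/min:
minutes = cylinder_duration_min(9000, 60)
print(minutes, minutes / 60)  # 150.0 minutes, i.e. 2.5 hours
```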

The big industrial oxygen supplies are typically provided in the form of liquid oxygen, shipped and stored in specialized trucks and tanks. The liquid oxygen stored at the hospital is then vapourized into its gaseous form and piped to the patient rooms as required. This is a much more compact and efficient delivery system, since one hundred litres of liquid oxygen expands into about 85,000 L of gaseous oxygen for breathing purposes.

The large-scale production and supply of liquid oxygen is a chemical and mechanical engineering accomplishment dating back to the early 1900s. It took several decades for many plants to be built, with continuous refinements over the years to improve the process and reduce energy requirements. The industrial process uses distillation to separate oxygen from nitrogen (and argon and other trace gases) in air. Since oxygen and nitrogen have quite different boiling points (−183 °C for oxygen and −195.8 °C for nitrogen), separation by distillation is a reasonably straightforward approach. However, distillation requires that air be liquefied through a combination of pressure and low temperature, and this presents some significant engineering challenges.

Modern plants, often called “Air Separation Units” or ASUs, operate at pressures up to about 6 atmospheres and temperatures in the −170 to −190 °C range. Clearly it takes some significant compression and refrigeration equipment to carry this out, and the plants are carefully designed to be as energy efficient as possible. The video below, from one major manufacturer, gives a simple overview of the ASU process. Of course, ASUs are built not only for medical oxygen, but also for the many other industrial uses of oxygen, such as in steel production, metal cutting, water treatment and chemicals manufacturing.

Bibliography:
Grainge, C. Breath of life: the evolution of oxygen therapy. J. R. Soc. Med. 97: 489-493 (2004).
Heffner, J.E. The Story of Oxygen. Respiratory Care, 58: 18-31 (2013).
Tellier, N. Air Separation and Liquefaction. (https://cryogenicsociety.org/resources/cryo_central/air_separation_and_liquefaction/)

These companies are sucking carbon from the atmosphere

Carbon capture is becoming increasingly popular among investors, and these companies are at the forefront.

Source: These companies are sucking carbon from the atmosphere

I’m currently not completely convinced that these “direct air capture” systems that remove carbon dioxide from the atmosphere are very practical. Technically they can certainly work, but the capital and operating costs are probably substantial, compared to the amount of CO2 you recover. However, if they do become widespread (as the linked article suggests), that will keep a lot of chemical engineers busy. And mechanical and electrical engineers too! And civil engineers during the construction phase.

We Don’t Teach Much

“In this way you must understand how laughable it is to say, ‘Tell me what to do!’ What advice could I possibly give? No, a far better request is, ‘Train my mind to adapt to any circumstance’….In this way, if circumstances take you off script…you won’t be desperate for a new prompting.”

Epictetus, Discourses

I ran across this quote from the early 2nd-century Stoic philosopher Epictetus the other day (in “The Daily Stoic” by Ryan Holiday). It reminded me that in engineering education we can’t possibly teach all the information and facts that one might need after graduation. In chemical engineering, for example, there are thousands of different chemicals, types of equipment, and processes for making so many different products. There are different methods for various pharmaceuticals, papers, metals, solvents, plastics, toothpaste, and the list goes on without end. There is a 27-volume Encyclopedia of Chemical Technology that covers many topics in chemical engineering, but even that has its limitations, even if some superhuman could actually learn everything in it. Forty-five years after starting a chemical engineering program in university, I’m still learning new things every week.

So no, we can’t teach everything an engineer might eventually need to know. We probably can’t even teach a small fraction of what people will eventually know or need to use. So we have to focus on training the engineer’s mind. How to approach problems, how to break them down into logical and manageable pieces. How to understand the science behind new situations. How to recognize the limitations of their skills and knowledge, and how they can address those knowledge gaps (it’s important to know what you don’t know!).

So when students of all sorts ask “why do we have to learn this, when are we ever going to use it?”, the answer may well be “possibly never”. But it’s part of the training of the mind, which definitely will get used eventually.