
The newly discovered organ is neither new nor an organ, but it is important

  • The interstitium, described as a ‘new organ’, has stirred controversy.
  • According to experts, the false claim has overshadowed other interesting aspects of this space in the human body.
The human body. Credit: Lwp Kommunikáció (Flickr)

Over the past week, numerous media outlets have reported the discovery of a new organ in the human body that had supposedly remained hidden until now: the interstitium. While some claim it is time to rewrite the anatomy textbooks, others point out that the structure has appeared in them for centuries. What has actually been discovered, if anything?

The cells of the human body are not ‘sealed’ together like the bricks of a house; there are spaces between them and the tissues they form. This space is called the interstitium. Inside it we find connective tissue (collagen and elastin) and a fluid called interstitial fluid.

A new organ? The origin of the controversy

The article published last week in the journal Scientific Reports does not refer to the discovery of a new organ, nor does it suggest that the interstitium could be considered one. The authors state that “the anatomy and composition of the interstitial space between cells is increasingly understood”, although its location and structure are described “vaguely in the scientific literature”.

Using a microscopy technique that images living rather than fixed tissues, the authors describe “the anatomy and histology of a previously unrecognized, though widespread and macroscopic, fluid-filled space. A novel expansion and specification of the concept of the human interstitium”. Expanded, but not discovered. So where does the idea of a new organ come from?

The story originates from the press release published on EurekAlert!, the science news service on which media around the world rely. Its headline: “A new ‘organ’ had been missed by standard methods”. Citing the authors as its source, it states that the researchers “have identified a previously unknown feature of human anatomy” and that the study is the first to identify the interstitium as an organ “in its own right”.

The problem is that the interstitial space has been known for at least 200 years, and experts are far from convinced that it can be considered an organ.

The interstitium. Credit: Petros C. Benias et al. (Scientific Reports)

“The idea that this is a ‘new organ’ or that the study has discovered something new and ‘previously unidentified’ in the human body is clearly false,” University of Chicago researcher Mark Westneat, who did not take part in the work, tells Hipertextual. The anatomist cites George Kalternbrunner’s Experimenta circa statum sanguinis et vasorum in inflammatione (1826) as an example of early mentions of this structure. “There are thousands of publications on it, and surgeons know it well,” he adds.

Westneat is not the only one to have received the finding with skepticism. On Twitter, many of his colleagues admitted confusion, unable to see where the novelty lay. Neil Theise, a researcher at the Icahn School of Medicine at Mount Sinai (New York, USA) and co-author of the study, has responded to the criticism by insisting that the interstitium “has never been described in such detail”. Hipertextual contacted him to learn more about his view of the coverage the story received, but had received no reply by the time this article was published.

Other media outlets have echoed the doubts raised by the study. “The only organs being made these days are the ones that appear on stage and make music,” James Williams, director of the Human Anatomy Laboratory at Rush University (USA), told The New York Times.

The interstitium, often overlooked

Anirban Maitra, a researcher at the University of Texas MD Anderson Cancer Center (USA), urged caution in an article published in The Scientist. “It is fair to say that histologists and pathologists have long known that there is an interstitial space and that it contains fluid. The claim that this is a hitherto unknown organ, and the largest one yet, seems a stretch.”

“Most biologists would be reluctant to call irregular microscopic fluid-filled spaces between tissues an ‘organ’. By that definition, the abdominal cavity and the pleura would be organs too,” Maitra continues. In any case, this is not the first time the interstitium has been given that name.

Novelty or not, organ or not, Theise’s article does contain interesting information about the interstitium, which is often overlooked.

Source: Pixabay.

“Both the study and the press release make false claims, but it is on the whole a good article with valuable new data,” says Westneat. For example, the authors propose that the connection between the interstitium and the lymphatic system could explain how some tumors metastasize so quickly once they reach this space. That is why researchers such as Williams regret that the noise around the ‘new organ’ distracts from the real interest of the findings.

This article originally appeared on hipertextual.com.

Why AlphaGo is not AI

President at Novaquark

What is AI and what is not is, to some extent, a matter of definition. There is no denying that AlphaGo and similar deep learning approaches have solved some very hard computational problems in recent years. But will they get us to AI, in the sense of a fully general intelligent machine, or “AGI”? Not quite, and here is why.

One of the key issues in building an artificial general intelligence is that it will have to make sense of the world for itself, to develop its own internal meaning for everything it will encounter, hear, say and do. Failing that, you end up with today’s AI programs, where all the meaning is actually provided by the designer of the application: the AI basically does not understand what is going on and has a narrow domain of expertise.

The problem of meaning is perhaps the most fundamental problem of AI, and it remains unsolved today. One of the first to express it was Harnad, in his 1990 paper on “The Symbol Grounding Problem”. Even if you don’t believe we explicitly manipulate symbols, which is indeed questionable, the problem remains: how is whatever representation exists inside the system grounded in the real world outside it?

To be more specific, the problem of meaning leads us to four sub-problems:

  1. How do you structure the information the agent (human or AI) is receiving from the world?
  2. How do you link this structured information to the world, or, taking the above definition, how do you build “meaning” for the agent?
  3. How do you synchronize this meaning with other agents? (Otherwise there is no communication possible, and you end up with an incomprehensible, isolated form of intelligence.)
  4. Why does the agent do anything at all, rather than nothing? How do you set all of this in motion?

The first problem, structuring information, is addressed very well by deep learning and similar unsupervised learning algorithms, used for example in the AlphaGo program. We have made tremendous progress in this area, partly thanks to recent gains in computing power and the use of GPUs (Graphics Processing Units), which are especially good at parallelizing information processing. What these algorithms do is take a signal that is extremely redundant and expressed in a high-dimensional space, and reduce it to a low-dimensional signal, minimizing the loss of information in the process. In other words, they “capture” what is important in the signal, from an information-processing point of view.
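To make the compression idea concrete, here is a minimal sketch using PCA from scikit-learn. PCA stands in only for brevity and the data is synthetic; AlphaGo itself relies on deep neural networks, not PCA, but the principle is the same: a redundant signal in a high-dimensional space collapses to a few dimensions with almost no loss of information.

```python
# Reduce a redundant 100-dimensional signal to 3 dimensions with PCA.
# Illustrative only: AlphaGo uses deep networks, not PCA.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Every one of the 100 dimensions is a noisy mixture of just 3 latent factors.
latent = rng.normal(size=(1000, 3))        # the "important" content
mixing = rng.normal(size=(3, 100))         # spreads it over 100 dimensions
signal = latent @ mixing + 0.05 * rng.normal(size=(1000, 100))

# Compress to 3 dimensions and check how much structure survives.
pca = PCA(n_components=3)
compressed = pca.fit_transform(signal)
print(f"variance retained: {pca.explained_variance_ratio_.sum():.3f}")
# Prints roughly 0.99: a 100 -> 3 reduction with almost no information lost.
```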

The second problem, linking information to the real world, or creating “meaning”, is fundamentally tied to robotics, because you need a body to interact with the world, and you need to interact with the world to build this link. That is why I often say that there is no AI without robotics (although there can be pretty good robotics without AI, but that is another story). This realization is often called the “embodiment problem”, and most AI researchers now agree that intelligence and embodiment are tightly coupled issues: every body has a different form of intelligence, as you can see quite clearly across the animal kingdom. It starts with simple things, like making sense of your own body parts, learning how to control them to produce desired effects in the observed world around you, and building your own notions of space, distance, color and so on. This has been studied extensively by researchers like Kevin O’Regan and his “sensorimotor theory”. It is only a first step, however, because you then have to build more and more abstract concepts on top of these grounded sensorimotor structures. We are not quite there yet, but that is the current state of research on the matter.
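As a toy illustration of that first sensorimotor step, here is a sketch, loosely inspired by, though in no way implementing, O’Regan’s theory: an agent discovers what its own motor commands do purely by observing how they change its sensations. The commands and their effects are invented for the example.

```python
# An agent learns the effect of its own motor commands from experience.
# Hypothetical body: each command shifts the agent's position by a fixed,
# initially unknown amount.
import random

TRUE_EFFECTS = {"left": -1.0, "right": +1.0, "rest": 0.0}  # hidden physics

learned = {cmd: 0.0 for cmd in TRUE_EFFECTS}  # the agent's predictions
position = 0.0
for _ in range(1000):
    cmd = random.choice(list(TRUE_EFFECTS))
    before = position                  # sensation before acting
    position += TRUE_EFFECTS[cmd]      # the world responds to the command
    after = position                   # sensation after acting
    # Nudge the predicted effect of this command toward what was observed.
    learned[cmd] += 0.1 * ((after - before) - learned[cmd])

print(learned)  # converges to {'left': -1.0, 'right': 1.0, 'rest': 0.0}
```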

The third problem is fundamentally the question of the origin of culture. Some animals show simple forms of culture, even transgenerationally acquired competencies, but these are very limited, and only humans have crossed the threshold into the exponentially growing accumulation of knowledge that we call culture. Culture is the essential catalyst of intelligence, and an AI without the capability to interact culturally would be nothing more than an academic curiosity. However, culture cannot be hand-coded into a machine; it must be the result of a learning process. The best place to start trying to understand this process is developmental psychology, with the work of Piaget or Tomasello on how children acquire cultural competency. It gave birth to a new discipline in robotics called “developmental robotics”, which takes the child as a model (as illustrated by the iCub robot). It is also closely linked to the study of language learning, one of the topics I mostly worked on as a researcher myself.

The work of people like Luc Steels and many others has shown that we can see language acquisition as an evolutionary process: agents create new meanings by interacting with the world, use them to communicate with other agents, and select the most successful structures, the ones that best help them communicate (that is, mostly, to achieve joint intentions). After hundreds of rounds of trial and error, just as in biological evolution, the system evolves the best meanings and their syntactic/grammatical expression. This process has been tested experimentally and shows striking resemblances to how natural languages evolve and grow. Interestingly, it also accounts for instantaneous learning, where a concept is acquired in one shot, something that heavily statistical models like deep learning cannot explain. Several research labs are now trying to go further, acquiring grammar, gestures and more complex cultural conventions by these means, in particular the AI Lab that I founded at Aldebaran.
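The flavor of these evolutionary language games fits in a few lines. The following naming-game sketch is a standard textbook reduction in the spirit of Steels’ work, not his actual models: agents invent words for a single shared object and, through repeated pairwise interactions with no central coordination, converge on one common name.

```python
# Minimal naming game: 20 agents converge on a shared word for one object.
import itertools
import random

fresh = itertools.count()             # supplies newly invented words
agents = [set() for _ in range(20)]   # each agent's word inventory

for _ in range(20000):
    speaker, hearer = random.sample(agents, 2)
    if not speaker:
        speaker.add(f"word-{next(fresh)}")   # no word yet: invent one
    word = random.choice(list(speaker))
    if word in hearer:
        # Communicative success: both agents drop all competing synonyms.
        speaker.clear()
        speaker.add(word)
        hearer.clear()
        hearer.add(word)
    else:
        hearer.add(word)              # failure: the hearer learns the word

print(f"words still in use: {len(set.union(*agents))}")  # typically 1
```

In runs of this kind the number of competing words first grows, then collapses as successful interactions prune synonyms, which is the qualitative dynamic the experiments mentioned above exhibit.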

Finally, the fourth problem deals with what is called “intrinsic motivation”. Why does the agent do anything at all, rather than nothing? Survival requirements are not enough to explain human behavior: even perfectly fed and secure, humans don’t just sit idle until hunger comes back. There is more; they explore and experiment, driven by some kind of intrinsic curiosity. Researchers like Pierre-Yves Oudeyer have shown that simple mathematical formulations of curiosity, expressed as the agent’s tendency to maximize its rate of learning, are enough to account for incredibly complex and surprising behaviors (see the Playground experiment done at Sony CSL). Something of the sort seems to be needed inside the system to drive its desire to go through the previous three steps: structure the information of the world, connect it to its body to create meaning, and then select the most communicatively efficient structures to create a joint culture that enables cooperation. This, in my view, is the program of AGI.
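A crude sketch of that idea, my own illustrative reduction rather than the actual Playground experiment, looks like this: the agent tracks how fast its prediction error is dropping in each activity and preferentially practices whichever is currently most learnable, so it abandons both mastered activities and unlearnable ones.

```python
# Toy curiosity loop: practice the activity with the highest learning
# progress. Activities, decay rates and constants are invented here.
import random

errors   = {"easy": 1.0, "rich": 1.0, "noise": 1.0}   # prediction errors
decay    = {"easy": 0.5, "rich": 0.05, "noise": 0.0}  # learnability
progress = {a: 0.0 for a in errors}                   # smoothed error drop
counts   = {a: 0 for a in errors}

for _ in range(500):
    # Epsilon-greedy: mostly pick the highest-progress activity.
    if random.random() < 0.1:
        activity = random.choice(list(errors))
    else:
        activity = max(progress, key=progress.get)
    counts[activity] += 1

    old = errors[activity]
    # Practice reduces error only where something is learnable; small
    # noise keeps the unlearnable activity fluctuating instead of improving.
    new = max(0.0, old * (1 - decay[activity]) + 0.05 * (random.random() - 0.5))
    errors[activity] = new
    progress[activity] += 0.3 * ((old - new) - progress[activity])

print(counts)  # "easy" dominates briefly, then "rich"; "noise" stays rare
```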

Again, the advances in deep learning and the recent success of this kind of AI at the game of Go are very good news, because many useful applications can be imagined from there to help medical research, industry, environmental preservation and many other fields. But this is only one part of the problem, as I have tried to show here. I don’t believe deep learning is the silver bullet that will get us to true AI, in the sense of a machine capable of learning to live in the world, interacting naturally with us, deeply understanding the complexity of our emotions and cultural biases, and ultimately helping us make a better world.

This article originally appeared on LinkedIn
