Oophorectomy is the surgical removal of the ovaries of a female animal. In the case of non-human animals, this is also called spaying. It is a form of sterilization.

The removal of the ovaries together with the Fallopian tubes is called salpingo-oophorectomy. Oophorectomy and salpingo-oophorectomy are not common forms of birth control in humans; more usual is tubal ligation, in which the Fallopian tubes are blocked but the ovaries remain intact.

In humans, oophorectomy is most often performed together with a hysterectomy - the removal of the uterus. Performing it as part of a hysterectomy when the ovaries themselves are healthy is somewhat controversial.

In animals, spaying is an invasive removal of the ovaries, but it rarely has major complications; the belief that it causes weight gain is a superstition not based on fact. Spaying is especially important for certain animals, such as cats and dogs, in which ova are released at certain intervals (a period called estrus or "heat"). If the ovum is not released during these animals' heat, severe medical problems can result; these can be averted by spaying or by mating the animal with a male.

Oophorectomy is sometimes referred to as castration, but that term is most often used to mean the removal of a male animal's testicles.

Algorithmic information theory

Algorithmic information theory is a field of study which attempts to capture the concept of complexity using tools from theoretical computer science. The chief idea is to define the complexity (or Kolmogorov complexity) of a string as the length of the shortest program which outputs that string. Strings that can be produced by short programs are considered to be not very complex; for example, a string of a million zeros has a very short description, while a string of a million random coin flips most likely does not. This notion is surprisingly deep and can be used to state and prove impossibility results akin to Gödel's incompleteness theorem and Turing's halting problem.

The field was developed by Andrey Kolmogorov, Ray Solomonoff and Gregory Chaitin starting in the late 1960s. There are several variants of Kolmogorov complexity or algorithmic information. The most widely used one is based on self-delimiting programs and is due to Leonid Levin (1974).

To formalize the above definition of complexity, one has to specify exactly what types of programs are allowed. Fortunately, it doesn't really matter: one could take a particular notation for Turing machines, or LISP programs, or Pascal programs, or Java virtual machine bytecode. If we agree to measure the lengths of all objects consistently in bits, then the resulting notions of complexity will differ only by a constant term: if K1(s) and K2(s) are the complexities of the string s according to two different programming languages L1 and L2, then there is a constant c (which depends only on the languages chosen, not on s) such that

K_1(s) \le K_2(s) + c

Here, c is the length in bits of an interpreter for L2 written in L1. (One technical requirement is that it must be possible to embed arbitrary binary data into programs without delimiters, e.g. by providing such data on "standard input" and considering all bits read from this stream as part of the program.)
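The origin of this constant can be spelled out in one step (a sketch, writing q for the interpreter and p for a program): if p is a shortest L2-program producing s, then the interpreter q followed by p is an L1-program producing s, so

K_1(s) \le |q| + |p| = c + K_2(s)

Exchanging the roles of the two languages gives the same bound in the other direction, with a possibly different constant.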

In the following, we will fix one definition and simply write K(s) for the complexity of the string s.

The first surprising result is that K(s) cannot be computed: there is no general algorithm which takes a string s as input and produces the number K(s) as output. The proof is a formalization of the amusing Berry paradox: "Let n be the smallest number that cannot be defined in fewer than twenty English words. Well, I just defined it in fewer than twenty English words."
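The proof idea can be written out as a short, purely hypothetical program. The function K below stands for the assumed algorithm for computing complexity; the contradiction is that the program itself is a short description of the string it outputs.

from itertools import count, product

def K(s: str) -> int:
    # Hypothetical oracle for Kolmogorov complexity; by the theorem above,
    # no such computable function can exist.
    raise NotImplementedError

def first_complex_string(bound: int) -> str:
    # Enumerate all bit strings in order of length and return the first one
    # whose complexity exceeds the given bound.
    for n in count(1):
        for bits in product("01", repeat=n):
            s = "".join(bits)
            if K(s) > bound:
                return s

# If K were computable, first_complex_string(10**9) would output a string of
# complexity above 10**9 bits using a program only a few hundred bits long --
# the formal version of the Berry paradox.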

It is however straightforward to compute upper bounds for K(s): simply compress the string s with some method, implement the corresponding decompressor in the chosen language, concatenate the decompressor to the compressed string, and measure the resulting string's length.
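For example, any off-the-shelf compressor yields such a bound. The sketch below (in Python, using zlib; the decompressor allowance is an arbitrary placeholder rather than a measured program length) illustrates the idea:

import os
import zlib

def K_upper_bound(s: bytes, decompressor_overhead: int = 100) -> int:
    # Upper bound on K(s) in bits: size of the compressed data plus a fixed
    # allowance for the length of the decompression program itself.
    compressed = zlib.compress(s, 9)
    return 8 * (len(compressed) + decompressor_overhead)

print(K_upper_bound(b"0" * 10**6))        # highly compressible: small bound
print(K_upper_bound(os.urandom(10**6)))   # incompressible: close to 8 * 10**6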

The next important result is about the randomness of strings. Most strings are complex in the sense that they cannot be significantly compressed: K(s) is not much smaller than |s|, the length of s in bits. The precise statement is as follows: the probability that a random string of length n has complexity less than n - k is smaller than 2^{-k}. The proof is a counting argument: you count the programs and the strings, and compare. This theorem is the justification for Mike Goldman's challenge in the comp.compression FAQ http://www.faqs.org/faqs/compression-faq/ :

I will attach a prize of $5,000 to anyone who successfully meets this challenge. First, the contestant will tell me HOW LONG of a data file to generate. Second, I will generate the data file, and send it to the contestant. Last, the contestant will send me a decompressor and a compressed file, which will together total in size less than the original data file, and which will be able to restore the compressed file to the original state.
With this offer, you can tune your algorithm to my data. You tell me the parameters of size in advance. All I get to do is arrange the bits within my file according to the dictates of my whim. As a processing fee, I will require an advance deposit of $100 from any contestant. This deposit is 100% refundable if you meet the challenge.
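The counting argument mentioned above can be made explicit: there are at most 2^{n-k} - 1 programs shorter than n - k bits, hence at most 2^{n-k} - 1 strings s with K(s) < n - k, while there are 2^n strings of length n. The probability that a randomly chosen string of length n is compressible by k bits is therefore at most

\frac{2^{n-k} - 1}{2^n} < 2^{-k}

which is why, for any appreciable saving k, the challenge is effectively unwinnable.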

Now for Chaitin's incompleteness result: though we know that most strings are complex in the above sense, the fact that a specific string is complex can never be proven (if the string's length is above a certain threshold). The precise formalization is as follows. Suppose we fix a particular consistent axiomatic system for the natural numbers, say Peano's axioms. Then there exists a constant L (which only depends on the particular axiomatic system and the choice of definition of complexity) such that there is no string s for which the statement

K(s) \ge L

can be proven within the axiomatic system (even though, as we know, the vast majority of those statements must be true). Again, the proof of this result is a formalization of Berry's paradox.
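As with uncomputability, the argument can be phrased as a short program. The sketch below is purely illustrative: enumerate_theorems is an assumed routine that lists all theorems derivable from the chosen axioms (possible in principle for any effectively axiomatized system), and statements are assumed to be encoded as plain strings of the form "K(...) >= L".

from typing import Iterator

def enumerate_theorems() -> Iterator[str]:
    # Assumed: systematically generates every theorem provable from the
    # axioms, e.g. by searching through all proofs in order of length.
    raise NotImplementedError

def first_provably_complex_string(L: int) -> str:
    # Search the theorems for one asserting that some explicit string s
    # has complexity at least L, and output that s.
    suffix = ") >= " + str(L)
    for theorem in enumerate_theorems():
        if theorem.startswith("K(") and theorem.endswith(suffix):
            return theorem[len("K("):-len(suffix)]

# The length of this program grows only like log L, so for large enough L it
# is shorter than L bits.  If any statement "K(s) >= L" were provable, the
# program would find that proof and output s, describing s in fewer than L
# bits and contradicting the proven statement.  Hence no such proof can exist.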

Similar ideas are used to prove the properties of Chaitin's constant.

The minimum message length (MML) principle of statistical and inductive inference and machine learning was developed by C.S. Wallace http://www.csse.monash.edu.au/~dld/CSWallacePublications/ and D.M. Boulton in 1968, independently of algorithmic information theory. MML is Bayesian (it incorporates prior beliefs) and information-theoretic. It has the desirable properties of statistical invariance (the inference transforms with a re-parameterisation, such as from polar coordinates to Cartesian coordinates), statistical consistency (even for very hard problems, MML will converge to the underlying model) and efficiency (the MML model will converge to the true underlying model about as quickly as is possible). C.S. Wallace and D.L. Dowe showed a formal connection between MML and algorithmic information theory (or Kolmogorov complexity) in 1999.
