Problem size

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Hfastedge (talk | contribs) at 10:28, 23 September 2002. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

The problem size, also known as the input size, is often loosely defined. Informally, it means: "whatever it is that you are putting into the algorithm".

To get an accurate analysis of an algorithm, you need to define your input as atomically as possible; if you do not, you should state your reasons for departing from that. Example:

If your input is N binary trees, it might be useful not to include the computational cost of generating those trees in the first place; for instance, if they were pre-generated, it takes only O(1) (constant) time to read each one from memory. However, it might be more appropriate to include the cost of generating the trees if you re-generate them frequently, say every 10 minutes.
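The contrast can be sketched as follows. This is a minimal illustration, not from the original article: `make_tree` and `tree_size` are hypothetical helpers, and the two sums show which costs each style of analysis counts.

```python
import random

def make_tree(depth):
    """Hypothetical helper: build a random binary tree of the given depth."""
    if depth == 0:
        return None
    return (random.random(), make_tree(depth - 1), make_tree(depth - 1))

def tree_size(tree):
    """Count the nodes in a binary tree; cost is linear in the tree's size."""
    if tree is None:
        return 0
    _, left, right = tree
    return 1 + tree_size(left) + tree_size(right)

N = 8

# Analysis 1: the trees are pre-generated, so the algorithm being analyzed
# only pays for the traversal; reading a stored tree is treated as O(1).
pregenerated = [make_tree(4) for _ in range(N)]
total = sum(tree_size(t) for t in pregenerated)

# Analysis 2: generation happens inside the algorithm, so its cost is
# part of the analysis as well.
total_with_generation = sum(tree_size(make_tree(4)) for _ in range(N))
```

Both sums compute the same value; the difference lies only in where the analysis draws the boundary of "the algorithm", which is exactly the choice described above.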

These two approaches contrast with each other, and hint at the shortcomings of Big O notation when applied to increasingly complex real-world environments.
