This HTML5 document contains 32 embedded RDF statements represented using HTML+Microdata notation.

The embedded RDF content will be recognized by any processor of HTML5 Microdata.

Namespace Prefixes

Prefix        IRI
dcterms       http://purl.org/dc/terms/
dbo           http://dbpedia.org/ontology/
foaf          http://xmlns.com/foaf/0.1/
n12           https://global.dbpedia.org/id/
yago          http://dbpedia.org/class/yago/
rdfs          http://www.w3.org/2000/01/rdf-schema#
freebase      http://rdf.freebase.com/ns/
rdf           http://www.w3.org/1999/02/22-rdf-syntax-ns#
owl           http://www.w3.org/2002/07/owl#
wikipedia-en  http://en.wikipedia.org/wiki/
dbc           http://dbpedia.org/resource/Category:
prov          http://www.w3.org/ns/prov#
xsdh          http://www.w3.org/2001/XMLSchema#
gold          http://purl.org/linguistics/gold/
wikidata      http://www.wikidata.org/entity/
dbr           http://dbpedia.org/resource/

Statements

Subject Item
dbr:Mehrotra_predictor–corrector_method
rdf:type
yago:WikicatOptimizationAlgorithmsAndMethods
yago:Act100030358
yago:Abstraction100002137
dbo:Software
yago:YagoPermanentlyLocatedEntity
yago:PsychologicalFeature100023100
yago:Rule105846932
yago:Activity100407535
yago:Algorithm105847438
yago:Event100029378
yago:Procedure101023820
rdfs:label
Mehrotra predictor–corrector method
rdfs:comment
Mehrotra's predictor–corrector method in optimization is a specific interior point method for linear programming. It was proposed in 1989 by Sanjay Mehrotra. The method is based on the fact that at each iteration of an interior point algorithm it is necessary to compute the Cholesky decomposition (factorization) of a large matrix to find the search direction. The factorization step is the most computationally expensive step in the algorithm. Therefore, it makes sense to use the same decomposition more than once before recomputing it.
dcterms:subject
dbc:Optimization_algorithms_and_methods
dbo:wikiPageID
1635098
dbo:wikiPageRevisionID
1062447535
dbo:wikiPageWikiLink
dbc:Optimization_algorithms_and_methods
dbr:Quadratic_programming
dbr:Iteration
dbr:Interior_point_method
dbr:Cholesky_decomposition
dbr:Optimization_(mathematics)
dbr:Linear_programming
dbr:Karush–Kuhn–Tucker_conditions
owl:sameAs
n12:4sCHN freebase:m.05jfct wikidata:Q6809859
dbo:abstract
Mehrotra's predictor–corrector method in optimization is a specific interior point method for linear programming. It was proposed in 1989 by Sanjay Mehrotra. The method is based on the fact that at each iteration of an interior point algorithm it is necessary to compute the Cholesky decomposition (factorization) of a large matrix to find the search direction. The factorization step is the most computationally expensive step in the algorithm. Therefore, it makes sense to use the same decomposition more than once before recomputing it. At each iteration of the algorithm, Mehrotra's predictor–corrector method uses the same Cholesky decomposition to find two different directions: a predictor and a corrector. The idea is to first compute an optimizing search direction based on a first order term (predictor). The step size that can be taken in this direction is used to evaluate how much centrality correction is needed. Then, a corrector term is computed: this contains both a centrality term and a second order term. The complete search direction is the sum of the predictor direction and the corrector direction. Although there is no theoretical complexity bound on it yet, Mehrotra's predictor–corrector method is widely used in practice. Its corrector step uses the same Cholesky decomposition found during the predictor step in an effective way, and thus it is only marginally more expensive than a standard interior point algorithm. However, the additional overhead per iteration is usually paid off by a reduction in the number of iterations needed to reach an optimal solution. It also appears to converge very fast when close to the optimum.
gold:hypernym
dbr:Implementation
prov:wasDerivedFrom
wikipedia-en:Mehrotra_predictor–corrector_method?oldid=1062447535&ns=0
dbo:wikiPageLength
8666
foaf:isPrimaryTopicOf
wikipedia-en:Mehrotra_predictor–corrector_method
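
The dbo:abstract above explains how each iteration computes a predictor and a corrector direction from the same Cholesky factorization. The following is a minimal, illustrative sketch of one such iteration for a standard-form LP (min c^T x subject to A x = b, x >= 0) using NumPy and SciPy; all function and variable names here are our own illustrative choices, not from any library, and the code is a sketch of the idea rather than a production solver.

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def mehrotra_iteration(A, b, c, x, y, s):
    """One illustrative Mehrotra predictor-corrector iteration (sketch)."""
    m, n = A.shape
    r_p = b - A @ x               # primal residual
    r_d = c - A.T @ y - s         # dual residual
    mu  = x @ s / n               # duality measure

    # One Cholesky factorization of the normal-equations matrix A diag(x/s) A^T;
    # it is reused below for both the predictor and the corrector solve.
    M = (A * (x / s)) @ A.T
    chol = cho_factor(M)

    def solve_newton(r_xs):
        # Solve the Newton system for a given complementarity right-hand side,
        # reusing the factorization (only triangular solves are repeated).
        dy = cho_solve(chol, r_p + A @ ((x * r_d - r_xs) / s))
        ds = r_d - A.T @ dy
        dx = (r_xs - x * ds) / s
        return dx, dy, ds

    def max_step(v, dv):
        # Largest step length in (0, 1] keeping v + alpha * dv strictly positive.
        neg = dv < 0
        return min(1.0, 0.9995 * np.min(-v[neg] / dv[neg])) if neg.any() else 1.0

    # Predictor (affine-scaling) direction: aims straight at zero complementarity.
    dx_a, dy_a, ds_a = solve_newton(-x * s)
    mu_aff = (x + max_step(x, dx_a) * dx_a) @ (s + max_step(s, ds_a) * ds_a) / n

    # Centering parameter chosen from the predictor's progress (Mehrotra's heuristic),
    # then the combined corrector: centrality term plus second-order term.
    sigma = (mu_aff / mu) ** 3
    dx, dy, ds = solve_newton(sigma * mu - x * s - dx_a * ds_a)

    a_p, a_d = max_step(x, dx), max_step(s, ds)
    return x + a_p * dx, y + a_d * dy, s + a_d * ds

In a full solver this iteration would be repeated from a strictly positive starting point until the residuals and the duality measure mu fall below a tolerance. The point of the sketch is that both calls to solve_newton share the single factorization chol, which is why the corrector adds only modest cost per iteration, as the abstract describes.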